[jira] [Created] (FLINK-24192) Sql get plan failed. All the inputs have relevant nodes, however the cost is still infinite

2021-09-07 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-24192:
--

 Summary: Sql get plan failed. All the inputs have relevant nodes, 
however the cost is still infinite
 Key: FLINK-24192
 URL: https://issues.apache.org/jira/browse/FLINK-24192
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.15.0
Reporter: xiaojin.wy
 Fix For: 1.15.0


*sql*

{code:java}
CREATE TABLE database5_t0(
`c0` FLOAT , `c1` FLOAT , `c2` CHAR
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath00'
)
CREATE TABLE database5_t1(
`c0` TINYINT , `c1` INTEGER
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath11'
)
CREATE TABLE database5_t2 (
  `c0` FLOAT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)
CREATE TABLE database5_t3 (
  `c0` STRING , `c1` STRING
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)
INSERT INTO database5_t0(c0, c1, c2) VALUES(cast(0.84355265 as FLOAT), 
cast(0.3269016 as FLOAT), cast('' as CHAR))
INSERT INTO database5_t1(c0, c1) VALUES(cast(-125 as TINYINT), -1715936454)
INSERT INTO database5_t2(c0) VALUES(cast(-1.7159365 as FLOAT))
INSERT INTO database5_t3(c0, c1) VALUES('16:36:29', '1969-12-12')
INSERT INTO MySink
SELECT COUNT(ref0) from (SELECT COUNT(1) AS ref0 FROM database5_t0, 
database5_t3, database5_t1, database5_t2 WHERE CAST ( EXISTS (SELECT 1) AS 
BOOLEAN)
UNION ALL
SELECT COUNT(1) AS ref0 FROM database5_t0, database5_t3, database5_t1, 
database5_t2
WHERE CAST ((NOT CAST (( EXISTS (SELECT 1)) AS BOOLEAN)) AS BOOLEAN)
UNION ALL
SELECT COUNT(1) AS ref0 FROM database5_t0, database5_t3, database5_t1, 
database5_t2 WHERE CAST ((CAST ( EXISTS (SELECT 1) AS BOOLEAN)) IS NULL AS 
BOOLEAN)) as table1
{code}
After executing the SQL in the test case, we get an error like this:

{code:java}
org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

FlinkLogicalSink(table=[default_catalog.default_database.MySink], fields=[a])
+- FlinkLogicalCalc(select=[CAST(EXPR$0) AS a])
   +- FlinkLogicalAggregate(group=[{}], EXPR$0=[COUNT()])
      +- FlinkLogicalUnion(all=[true])
         :- FlinkLogicalUnion(all=[true])
         :  :- FlinkLogicalCalc(select=[0 AS $f0])
         :  :  +- FlinkLogicalAggregate(group=[{}], ref0=[COUNT()])
         :  :     +- FlinkLogicalJoin(condition=[$1], joinType=[semi])
         :  :        :- FlinkLogicalCalc(select=[c0])
         :  :        :  +- FlinkLogicalJoin(condition=[true], joinType=[inner])
         :  :        :     :- FlinkLogicalCalc(select=[c0])
         :  :        :     :  +- FlinkLogicalJoin(condition=[true], joinType=[inner])
         :  :        :     :     :- FlinkLogicalCalc(select=[c0])
         :  :        :     :     :  +- FlinkLogicalJoin(condition=[true], joinType=[inner])
         :  :        :     :     :     :- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, database5_t0, project=[c0]]], fields=[c0])
         :  :        :     :     :     +- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, database5_t3, project=[c0]]], fields=[c0])
         :  :        :     :     +- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, database5_t1, project=[c0]]], fields=[c0])
         :  :        :     +- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, database5_t2]], fields=[c0])
         :  :        +- FlinkLogicalCalc(select=[IS NOT NULL(m) AS $f0])
         :  :           +- FlinkLogicalAggregate(group=[{}], m=[MIN($0)])
         :  :              +- FlinkLogicalCalc(select=[true AS i])
         :  :                 +- FlinkLogicalValues(tuples=[[{ 0 }]])
         :  +- FlinkLogicalCalc(select=[0 AS $f0])
         :     +- FlinkLogicalAggregate(group=[{}], ref0=[COUNT()])
         :        +- FlinkLogicalJoin(condition=[$1], joinType=[anti])
         :           :- FlinkLogicalCalc(select=[c0])
         :           :  +- FlinkLogicalJoin(condition=[true], joinType=[inner])
         :           :     :- FlinkLogicalCalc(select=[c0])
         :           :     :  +- FlinkLogicalJoin(condition=[true], joinType=[inner])
         :           :     :     :- FlinkLogicalCalc(select=[c0])
         :           :     :     :  +- FlinkLogicalJoin(condition=[true], joinType=[inner])
         :           :     :     :     :- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, database5_t0, project=[c0]]], fields=[c0])
         :           :     :     :     +- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, database5_t3, project=[c0]]], fields=[c0])
         :           :     :     +- FlinkLogicalTableSourceScan(table=[[default_catalog, default_database, database5_t1, project=[c0]]], fields=[c0])

[jira] [Updated] (FLINK-23602) org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No applicable constructor/method found for actual parameters "org.apache.flink.table.data.DecimalData

2021-08-06 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23602:
---
Description: 
{code:java}
CREATE TABLE database5_t2 (
  `c0` DECIMAL , `c1` BIGINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)
INSERT OVERWRITE database5_t2(c0, c1) VALUES(-120229892, 790169221), 
(-1070424438, -1787215649)
SELECT COUNT(CAST ((database5_t2.c0) BETWEEN ((REVERSE(CAST ('1969-12-08' AS 
STRING  AND
(('-727278084') IN (CAST (database5_t2.c0 AS STRING), '0.9996987230442536')) AS 
DOUBLE )) AS ref0
FROM database5_t2 GROUP BY database5_t2.c1  ORDER BY database5_t2.c1
{code}

Running the SQL above will generate this error:


{code:java}
java.util.concurrent.ExecutionException: 
org.apache.flink.table.api.TableException: Failed to wait job finish

at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at 
org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129)
at 
org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:482)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
Caused by: org.apache.flink.table.api.TableException: Failed to wait job finish
at 
org.apache.flink.table.api.internal.InsertResultIterator.hasNext(InsertResultIterator.java:56)
at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.isFirstRowReady(TableResultImpl.java:383)
at 
org.apache.flink.table.api.internal.TableResultImpl.lambda$awaitInternal$1(TableResultImpl.java:116)
at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: 

[jira] [Closed] (FLINK-23641) Bad data should not stop and fail the job

2021-08-05 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy closed FLINK-23641.
--
Resolution: Won't Fix
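
For the request in the issue quoted below, note that the built-in csv format can already skip unparsable records; a minimal sketch (assuming the standard csv format rather than the test-only testcsv used in the repro):

{code:java}
-- Sketch only: with the standard csv format, 'csv.ignore-parse-errors' drops
-- rows that fail to parse instead of failing the whole job.
CREATE TABLE database5_t2 (
  `c0` DECIMAL
) WITH (
  'connector' = 'filesystem',
  'format' = 'csv',
  'path' = '$resultPath33',
  'csv.ignore-parse-errors' = 'true'
)
{code}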

> Bad data should not stop and fail the job
> ---
>
> Key: FLINK-23641
> URL: https://issues.apache.org/jira/browse/FLINK-23641
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Priority: Major
>
> {code:java}
> CREATE TABLE database5_t2 (
>   `c0` DECIMAL
> ) WITH (
>   'connector' = 'filesystem',
>   'format' = 'testcsv',
>   'path' = '$resultPath33'
> )
> CREATE TABLE database5_t3 (
>   `c0` STRING , `c1` INTEGER , `c2` STRING , `c3` BIGINT
> ) WITH (
>   'connector' = 'filesystem',
>   'format' = 'testcsv',
>   'path' = '$resultPath33'
> )
> INSERT OVERWRITE database5_t2(c0) VALUES(1969075679)
> INSERT OVERWRITE database5_t3(c0, c1, c2, c3) VALUES ('yaW鉒', -943510659, 
> '1970-01-20 09:49:24', 1941473165), ('2#融', 1174376063, '1969-12-21 
> 09:54:49', 1941473165), ('R>t 蹿', 1648164266, '1969-12-14 14:20:28', 
> 1222780269)
> SELECT MAX(CAST (IS_DIGIT(1837249903) AS DOUBLE )) AS ref0 FROM database5_t2, 
> database5_t3
> WHERE CAST ((database5_t3.c1) BETWEEN ((COSH(CAST ((-(CAST (database5_t3.c0 
> AS DOUBLE ))) AS DOUBLE 
> AND ((LN(CAST (351648321 AS DOUBLE  AS BOOLEAN) GROUP BY database5_t2.c0 
> ORDER BY database5_t2.c0
> {code}
> Running the case, you will find this error:
> {code:java}
> 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92)
>   at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:317)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>   at 
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
>   at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
> Caused by: 

[jira] [Updated] (FLINK-23642) A result of SELECT (not (CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST ((SHA1(CAST ('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL should return true

2021-08-05 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23642:
---
Description: 
The SQL below returns a 'false' value:

{code:java}
SELECT (not (CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST 
((SHA1(CAST ('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL
{code}

But the SQL below returns 'true'.
{code:java}
SELECT ((CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST ((SHA1(CAST 
('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL
{code}


I think both statements should return true, because SELECT NULL IS NULL returns 
true and SELECT (NOT NULL) IS NULL also returns true.
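
A minimal sketch of the three-valued logic assumed here (my reading of standard SQL NULL semantics, not verified against Flink):

{code:java}
-- NOT UNKNOWN is still UNKNOWN, so both statements should print TRUE:
SELECT NULL IS NULL;
SELECT (NOT CAST(NULL AS BOOLEAN)) IS NULL;
{code}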


  was:

The SQL below returns a 'false' value:

{code:java}
SELECT (not (CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST 
((SHA1(CAST ('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL
{code}

But the SQL below returns 'true'. I think both statements should return true, 
because SELECT NULL IS NULL returns true and SELECT (NOT NULL) IS NULL also 
returns true.



> A result of SELECT (not (CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR 
> (CAST ((SHA1(CAST ('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL 
> should return true
> 
>
> Key: FLINK-23642
> URL: https://issues.apache.org/jira/browse/FLINK-23642
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Priority: Major
>
> The SQL below returns a 'false' value:
> {code:java}
> SELECT (not (CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST 
> ((SHA1(CAST ('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL
> {code}
> But the SQL below returns 'true'.
> {code:java}
> SELECT ((CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST ((SHA1(CAST 
> ('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL
> {code}
>  I think both statements should return true, because SELECT NULL IS NULL 
> returns true and SELECT (NOT NULL) IS NULL also returns true.





[jira] [Created] (FLINK-23642) A result of SELECT (not (CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST ((SHA1(CAST ('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL should return true

2021-08-05 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23642:
--

 Summary: A result of SELECT (not (CAST (((CAST 
(IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST ((SHA1(CAST ('鲇T' AS STRING ))) AS 
BOOLEAN))) AS BOOLEAN))) IS NULL should return true
 Key: FLINK-23642
 URL: https://issues.apache.org/jira/browse/FLINK-23642
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
Reporter: xiaojin.wy



The SQL below returns a 'false' value:

{code:java}
SELECT (not (CAST (((CAST (IS_ALPHA(-421305765) AS BOOLEAN)) OR (CAST 
((SHA1(CAST ('鲇T' AS STRING ))) AS BOOLEAN))) AS BOOLEAN))) IS NULL
{code}

But the SQL below returns 'true'. I think both statements should return true, 
because SELECT NULL IS NULL returns true and SELECT (NOT NULL) IS NULL also 
returns true.






[jira] [Created] (FLINK-23641) Bad data should not stop and fail the job

2021-08-05 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23641:
--

 Summary: Bad data should not stop and fail the job
 Key: FLINK-23641
 URL: https://issues.apache.org/jira/browse/FLINK-23641
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
Reporter: xiaojin.wy



{code:java}
CREATE TABLE database5_t2 (
  `c0` DECIMAL
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)
CREATE TABLE database5_t3 (
  `c0` STRING , `c1` INTEGER , `c2` STRING , `c3` BIGINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)

INSERT OVERWRITE database5_t2(c0) VALUES(1969075679)
INSERT OVERWRITE database5_t3(c0, c1, c2, c3) VALUES ('yaW鉒', -943510659, 
'1970-01-20 09:49:24', 1941473165), ('2#融', 1174376063, '1969-12-21 09:54:49', 
1941473165), ('R>t 蹿', 1648164266, '1969-12-14 14:20:28', 1222780269)

SELECT MAX(CAST (IS_DIGIT(1837249903) AS DOUBLE )) AS ref0 FROM database5_t2, 
database5_t3
WHERE CAST ((database5_t3.c1) BETWEEN ((COSH(CAST ((-(CAST (database5_t3.c0 AS 
DOUBLE ))) AS DOUBLE 
AND ((LN(CAST (351648321 AS DOUBLE  AS BOOLEAN) GROUP BY database5_t2.c0 
ORDER BY database5_t2.c0
{code}


Running the case, you will find this error:


{code:java}

java.util.concurrent.ExecutionException: 
org.apache.flink.table.api.TableException: Failed to wait job finish

at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at 
org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129)
at 
org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
Caused by: org.apache.flink.table.api.TableException: Failed to wait job finish
at 
org.apache.flink.table.api.internal.InsertResultIterator.hasNext(InsertResultIterator.java:56)
at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
at 

[jira] [Closed] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"

2021-08-04 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy closed FLINK-23609.
--
Resolution: Won't Fix
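
My reading of the repro quoted below: COSH of a large DOUBLE constant overflows to Infinity, and constant folding then calls new BigDecimal(double), which rejects Infinite and NaN values. A hypothetical minimal case under that assumption:

{code:java}
-- Hypothetical minimal repro (assumption, untested): cosh(710) overflows DOUBLE
-- to Infinity, so folding the constant cast to DECIMAL should hit
-- java.lang.NumberFormatException: Infinite or NaN in ExpressionReducer.
SELECT CAST(COSH(CAST(710 AS DOUBLE)) AS DECIMAL) AS ref0
{code}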

> Codegen error of "Infinite or NaN  at 
> java.math.BigDecimal.<init>(BigDecimal.java:898)"
> ---
>
> Key: FLINK-23609
> URL: https://issues.apache.org/jira/browse/FLINK-23609
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Priority: Major
>
> {code:java}
> CREATE TABLE database5_t2 (
>   `c0` DECIMAL
> ) WITH (
>   'connector' = 'filesystem',
>   'format' = 'testcsv',
>   'path' = '$resultPath33'
> )
> CREATE TABLE database5_t3 (
>   `c0` STRING , `c1` INTEGER , `c2` STRING , `c3` BIGINT
> ) WITH (
>   'connector' = 'filesystem',
>   'format' = 'testcsv',
>   'path' = '$resultPath33'
> )
> INSERT OVERWRITE database5_t2(c0) VALUES(1969075679)
> INSERT OVERWRITE database5_t3(c0, c1, c2, c3) VALUES ('yaW鉒', -943510659, 
> '1970-01-20 09:49:24', 1941473165), ('2#融', 1174376063, '1969-12-21 
> 09:54:49', 1941473165), ('R>t 蹿', 1648164266, '1969-12-14 14:20:28', 
> 1222780269)
> SELECT MAX(CAST (IS_DIGIT(1837249903) AS DOUBLE )) AS ref0 FROM database5_t2, 
> database5_t3
> WHERE CAST ((database5_t3.c1) BETWEEN ((COSH(CAST ((-(CAST (database5_t3.c0 
> AS DOUBLE ))) AS DOUBLE 
> AND ((LN(CAST (-351648321 AS DOUBLE  AS BOOLEAN) GROUP BY database5_t2.c0 
> ORDER BY database5_t2.c0
> {code}
> Running the SQL above, you will get this error:
> {code:java}
> java.lang.NumberFormatException: Infinite or NaN
>   at java.math.BigDecimal.<init>(BigDecimal.java:898)
>   at java.math.BigDecimal.<init>(BigDecimal.java:875)
>   at 
> org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at 
> 

[jira] [Commented] (FLINK-23602) org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No applicable constructor/method found for actual parameters "org.apache.flink.table.data.DecimalDa

2021-08-04 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17393015#comment-17393015
 ] 

xiaojin.wy commented on FLINK-23602:


I don't have permission to assign people; maybe [~jark] can help you.

> org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No 
> applicable constructor/method found for actual parameters 
> "org.apache.flink.table.data.DecimalData
> -
>
> Key: FLINK-23602
> URL: https://issues.apache.org/jira/browse/FLINK-23602
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> CREATE TABLE database5_t2 (
>   `c0` DECIMAL , `c1` BIGINT
> ) WITH (
>   'connector' = 'filesystem',
>   'format' = 'testcsv',
>   'path' = '$resultPath33'
> )
> INSERT OVERWRITE database5_t2(c0, c1) VALUES(-120229892, 790169221), 
> (-1070424438, -1787215649)
> SELECT COUNT(CAST ((database5_t2.c0) BETWEEN ((REVERSE(CAST ('1969-12-08' AS 
> STRING  AND
> (('-727278084') IN (database5_t2.c0, '0.9996987230442536')) AS DOUBLE )) AS 
> ref0
> FROM database5_t2 GROUP BY database5_t2.c1  ORDER BY database5_t2.c1
> {code}
> Running the SQL above will generate this error:
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92)
>   at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:482)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>   at 
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
>   at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
> Caused by: org.apache.flink.table.api.TableException: Failed to wait job 

[jira] [Updated] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"

2021-08-03 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23609:
---
Description: 
{code:java}
CREATE TABLE database5_t2 (
  `c0` DECIMAL
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)
CREATE TABLE database5_t3 (
  `c0` STRING , `c1` INTEGER , `c2` STRING , `c3` BIGINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)

INSERT OVERWRITE database5_t2(c0) VALUES(1969075679)
INSERT OVERWRITE database5_t3(c0, c1, c2, c3) VALUES ('yaW鉒', -943510659, 
'1970-01-20 09:49:24', 1941473165), ('2#融', 1174376063, '1969-12-21 09:54:49', 
1941473165), ('R>t 蹿', 1648164266, '1969-12-14 14:20:28', 1222780269)

SELECT MAX(CAST (IS_DIGIT(1837249903) AS DOUBLE )) AS ref0 FROM database5_t2, 
database5_t3
WHERE CAST ((database5_t3.c1) BETWEEN ((COSH(CAST ((-(CAST (database5_t3.c0 AS 
DOUBLE ))) AS DOUBLE 
AND ((LN(CAST (-351648321 AS DOUBLE  AS BOOLEAN) GROUP BY database5_t2.c0 
ORDER BY database5_t2.c0
{code}

Running the SQL above, you will get this error:


{code:java}
java.lang.NumberFormatException: Infinite or NaN

at java.math.BigDecimal.<init>(BigDecimal.java:898)
at java.math.BigDecimal.<init>(BigDecimal.java:875)
at 
org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
 

[jira] [Updated] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"

2021-08-03 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23609:
---
Summary: Codegen error of "Infinite or NaN  at 
java.math.BigDecimal.<init>(BigDecimal.java:898)"  (was: Codeine error of 
"Infinite or NaN  at java.math.BigDecimal.<init>(BigDecimal.java:898)")

> Codegen error of "Infinite or NaN  at 
> java.math.BigDecimal.<init>(BigDecimal.java:898)"
> ---
>
> Key: FLINK-23609
> URL: https://issues.apache.org/jira/browse/FLINK-23609
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
> Environment: java.lang.NumberFormatException: Infinite or NaN
>   at java.math.BigDecimal.<init>(BigDecimal.java:898)
>   at java.math.BigDecimal.<init>(BigDecimal.java:875)
>   at 
> org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
>   at 
> org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
>   at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
>   at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
>   at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
>   at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:781)
>   at 
> 

[jira] [Updated] (FLINK-23609) Codegen error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"

2021-08-03 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23609:
---
Environment: (was: java.lang.NumberFormatException: Infinite or NaN

at java.math.BigDecimal.<init>(BigDecimal.java:898)
at java.math.BigDecimal.<init>(BigDecimal.java:875)
at 
org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:781)
at 
org.apache.flink.table.planner.utils.TestingStatementSet.execute(TableTestBase.scala:1509)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)

[jira] [Created] (FLINK-23609) Codeine error of "Infinite or NaN at java.math.BigDecimal.<init>(BigDecimal.java:898)"

2021-08-03 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23609:
--

 Summary: Codeine error of "Infinite or NaN  at 
java.math.BigDecimal.<init>(BigDecimal.java:898)"
 Key: FLINK-23609
 URL: https://issues.apache.org/jira/browse/FLINK-23609
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
 Environment: java.lang.NumberFormatException: Infinite or NaN

at java.math.BigDecimal.<init>(BigDecimal.java:898)
at java.math.BigDecimal.<init>(BigDecimal.java:875)
at 
org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:202)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$FilterReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:152)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:282)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1702)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:781)
at 
org.apache.flink.table.planner.utils.TestingStatementSet.execute(TableTestBase.scala:1509)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:317)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

[jira] [Created] (FLINK-23602) org.codehaus.commons.compiler.CompileException: Line 84, Column 78: No applicable constructor/method found for actual parameters "org.apache.flink.table.data.DecimalData

2021-08-03 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23602:
--

 Summary: org.codehaus.commons.compiler.CompileException: Line 84, 
Column 78: No applicable constructor/method found for actual parameters 
"org.apache.flink.table.data.DecimalData
 Key: FLINK-23602
 URL: https://issues.apache.org/jira/browse/FLINK-23602
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
Reporter: xiaojin.wy



{code:java}
CREATE TABLE database5_t2 (
  `c0` DECIMAL , `c1` BIGINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath33'
)
INSERT OVERWRITE database5_t2(c0, c1) VALUES(-120229892, 790169221), 
(-1070424438, -1787215649)
SELECT COUNT(CAST ((database5_t2.c0) BETWEEN ((REVERSE(CAST ('1969-12-08' AS 
STRING  AND
(('-727278084') IN (database5_t2.c0, '0.9996987230442536')) AS DOUBLE )) AS ref0
FROM database5_t2 GROUP BY database5_t2.c1  ORDER BY database5_t2.c1
{code}

Running the SQL above will generate this error:


{code:java}
java.util.concurrent.ExecutionException: 
org.apache.flink.table.api.TableException: Failed to wait job finish

at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at 
org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129)
at 
org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:482)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
at 
com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
Caused by: org.apache.flink.table.api.TableException: Failed to wait job finish
at 
org.apache.flink.table.api.internal.InsertResultIterator.hasNext(InsertResultIterator.java:56)
at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
at 
org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.isFirstRowReady(TableResultImpl.java:383)
at 
org.apache.flink.table.api.internal.TableResultImpl.lambda$awaitInternal$1(TableResultImpl.java:116)
at 

[jira] [Created] (FLINK-23579) SELECT SHA2(NULL, CAST(NULL AS INT)) AS ref0 can't compile

2021-08-02 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23579:
--

 Summary: SELECT SHA2(NULL, CAST(NULL AS INT)) AS ref0 can't compile
 Key: FLINK-23579
 URL: https://issues.apache.org/jira/browse/FLINK-23579
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: xiaojin.wy
 Fix For: 1.14.0


Running the SQL SELECT SHA2(NULL, CAST(NULL AS INT)) AS ref0 produces the 
error below:
java.lang.RuntimeException: Could not instantiate generated class 
'ExpressionReducer$5'

at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:75)
at 
org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:108)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:306)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:93)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:307)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:169)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1718)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:803)
at 
org.apache.flink.table.api.internal.StatementSetImpl.execute(StatementSetImpl.java:125)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:487)
at 
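
For reference, a minimal batch Table API reproduction sketch (the class name is 
illustrative; only the query comes from the report):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class Sha2NullRepro {
    public static void main(String[] args) {
        // The failure is at plan time: ExpressionReducer tries to constant-fold
        // SHA2(NULL, CAST(NULL AS INT)) and the generated reducer class cannot
        // be instantiated.
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inBatchMode().build());
        tEnv.executeSql("SELECT SHA2(NULL, CAST(NULL AS INT)) AS ref0").print();
    }
}
{code}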

[jira] [Updated] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-16 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23303:
---
Description: 
{code:java}
CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
'connector' = 'filesystem',
 'path' = 'hdfs:///tmp/database5_t1.csv',
 'format' = 'csv'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 
1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (CAST 
((NOT CAST ((database5_t1.c0) AS BOOLEAN))) AS  SMALLINT))= (database5_t1.c0)
{code}


After executing the SQL above, you will get this error:



{code:java}
java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast 
to org.apache.calcite.rex.RexCall

at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at 
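
The trace shows PushFilterIntoTableSourceScanRule handing 
RexNodeToExpressionConverter a RexLiteral where a RexCall is assumed. A hedged 
sketch of the defensive shape such code usually takes (illustrative only, not 
the actual Flink patch):

{code:java}
import org.apache.calcite.rex.RexCall;
import org.apache.calcite.rex.RexLiteral;
import org.apache.calcite.rex.RexNode;

// Illustrative guard only: recurse into an operand when it really is a RexCall,
// and conservatively leave the filter un-pushed for anything else, instead of
// downcasting blindly the way the failing code path does.
final class RexOperandGuard {
    static boolean isConvertibleOperand(RexNode operand) {
        if (operand instanceof RexCall) {
            return true;   // safe to inspect ((RexCall) operand).getOperands()
        }
        if (operand instanceof RexLiteral) {
            return false;  // bare literal such as FALSE: keep the predicate on the Calc
        }
        return false;      // any other node kind: skip pushdown rather than throw
    }
}
{code}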

[jira] [Updated] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-15 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23303:
---
Description: 
{code:java}
CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
'connector' = 'filesystem',
 'path' = 'hdfs:///tmp/database5_t1.csv',
 'format' = 'csv'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 
1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (((NOT 
CAST ((database5_t1.c0) AS BOOLEAN))) = (database5_t1.c0))
{code}


After executing the SQL above, you will get this error:



{code:java}
java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast 
to org.apache.calcite.rex.RexCall

at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 

[jira] [Comment Edited] (FLINK-23288) Inserting (1.378593404E9) (0.6047707965147558) to a double type, it will generate CodeGenException

2021-07-09 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377900#comment-17377900
 ] 

xiaojin.wy edited comment on FLINK-23288 at 7/9/21, 7:56 AM:
-

But if a user wants to join two tables' data into one table whose column is 
DOUBLE, and one table's value is 1.378593404E9 while the other's is 
0.6047707965147558, it cannot work. Other engines support this, so I think we 
should also support it to improve the user experience.
[~TsReaper] 


was (Author: xiaojin.wy):
But if a user wants to join two tables' data into one table whose column is 
DOUBLE, and one table's value is 1.378593404E9 while the other's is 
0.6047707965147558, how can it work? So, I think we should support it.
[~TsReaper] 

> Inserting (1.378593404E9) (0.6047707965147558) to a double type, it will 
> generate CodeGenException
> --
>
> Key: FLINK-23288
> URL: https://issues.apache.org/jira/browse/FLINK-23288
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Priority: Major
>
> {code:sql}
> CREATE TABLE database5_t0(
>  `c0` DOUBLE , `c1` INTEGER , `c2` STRING
>  ) WITH (
>  'connector' = 'filesystem',
>  'path' = 'hdfs:///tmp/database5_t0.csv', 
>  'format' = 'csv'
>  )
>  INSERT OVERWRITE database5_t0(c0, c1, c2) VALUES(1.378593404E9, 1336919677, 
> '1969-12-31 20:29:41'), (0.6047707965147558, 1336919677, '1970-01-06 
> 03:36:50')
> {code}
> *After executing the SQL above, it will generate this error, but MySQL, 
> PostgreSQL and SQLite don't have it:*
> {code}
> org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types 
> of expression and result type. 
> Expression[GeneratedExpression(((org.apache.flink.table.data.DecimalData) 
> decimal$3),false,,DECIMAL(17, 16) NOT NULL,Some(0.6047707965147558))] type is 
> [DECIMAL(17, 16) NOT NULL], result type is [DOUBLE NOT NULL]
> at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:312)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:300)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:300)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:256)
>  at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$$anonfun$1.apply(ValuesCodeGenerator.scala:44)
>  at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$$anonfun$1.apply(ValuesCodeGenerator.scala:43)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>  at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
>  at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:50)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:171)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:247)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:58)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:171)
>  at 
> org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:81)
>  at 
> org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:80)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at 
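
Until the widening is supported, a hedged workaround sketch is to give each 
VALUES literal an explicit DOUBLE type up front (DDL and values come from the 
report; the CAST rewrite itself is an assumption, not a confirmed fix):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DoubleLiteralWorkaround {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inBatchMode().build());
        // DDL as reported.
        tEnv.executeSql(
                "CREATE TABLE database5_t0 (`c0` DOUBLE, `c1` INTEGER, `c2` STRING) WITH ("
                        + " 'connector' = 'filesystem',"
                        + " 'path' = 'hdfs:///tmp/database5_t0.csv',"
                        + " 'format' = 'csv')");
        // Cast each literal to DOUBLE explicitly so the planner never has to
        // widen a DECIMAL(17, 16) literal to DOUBLE.
        tEnv.executeSql(
                "INSERT OVERWRITE database5_t0(c0, c1, c2) VALUES"
                        + " (CAST(1.378593404E9 AS DOUBLE), 1336919677, '1969-12-31 20:29:41'),"
                        + " (CAST(0.6047707965147558 AS DOUBLE), 1336919677, '1970-01-06 03:36:50')")
                .await();
    }
}
{code}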

[jira] [Commented] (FLINK-23288) Inserting (1.378593404E9) (0.6047707965147558) to a double type, it will generate CodeGenException

2021-07-09 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377900#comment-17377900
 ] 

xiaojin.wy commented on FLINK-23288:


But if a user wants to join two tables' data into one table whose column is 
DOUBLE, and one table's value is 1.378593404E9 while the other's is 
0.6047707965147558, how can it work? So, I think we should support it.
[~TsReaper] 

> Inserting (1.378593404E9) (0.6047707965147558) to a double type, it will 
> generate CodeGenException
> --
>
> Key: FLINK-23288
> URL: https://issues.apache.org/jira/browse/FLINK-23288
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Priority: Major
>
> {code:sql}
> CREATE TABLE database5_t0(
>  `c0` DOUBLE , `c1` INTEGER , `c2` STRING
>  ) WITH (
>  'connector' = 'filesystem',
>  'path' = 'hdfs:///tmp/database5_t0.csv', 
>  'format' = 'csv'
>  )
>  INSERT OVERWRITE database5_t0(c0, c1, c2) VALUES(1.378593404E9, 1336919677, 
> '1969-12-31 20:29:41'), (0.6047707965147558, 1336919677, '1970-01-06 
> 03:36:50')
> {code}
> *After executing the SQL above, it will generate this error, but MySQL, 
> PostgreSQL and SQLite don't have it:*
> {code}
> org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types 
> of expression and result type. 
> Expression[GeneratedExpression(((org.apache.flink.table.data.DecimalData) 
> decimal$3),false,,DECIMAL(17, 16) NOT NULL,Some(0.6047707965147558))] type is 
> [DECIMAL(17, 16) NOT NULL], result type is [DOUBLE NOT NULL]
> at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:312)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:300)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:300)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:256)
>  at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$$anonfun$1.apply(ValuesCodeGenerator.scala:44)
>  at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$$anonfun$1.apply(ValuesCodeGenerator.scala:43)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>  at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
>  at 
> org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:50)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:171)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:247)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:58)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:171)
>  at 
> org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:81)
>  at 
> org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:80)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>  at 
> 

[jira] [Updated] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-07 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23303:
---
Description: 
{code:java}
CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath11'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 
1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (((NOT 
CAST ((database5_t1.c0) AS BOOLEAN))) = (database5_t1.c0))
{code}


*After executing the SQL above, you will get this error:*


{code:java}
java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast 
to org.apache.calcite.rex.RexCall

at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 

[jira] [Created] (FLINK-23303) org.apache.calcite.rex.RexLiteral cannot be cast to org.apache.calcite.rex.RexCall

2021-07-07 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23303:
--

 Summary: org.apache.calcite.rex.RexLiteral cannot be cast to 
org.apache.calcite.rex.RexCall
 Key: FLINK-23303
 URL: https://issues.apache.org/jira/browse/FLINK-23303
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: xiaojin.wy


CREATE TABLE database5_t1(
`c0` SMALLINT , `c1` INTEGER , `c2` SMALLINT
) WITH (
 'connector' = 'filesystem',
 'format' = 'testcsv',
 'path' = '$resultPath11'
)

INSERT INTO database5_t1(c0, c1, c2) VALUES(cast(-21957 as SMALLINT), 
1094690065, cast(16917 as SMALLINT))

SELECT database5_t1.c0 AS ref0 FROM database5_t1 WHERE (FALSE) NOT IN (((NOT 
CAST ((database5_t1.c0) AS BOOLEAN))) = (database5_t1.c0))

*After executing the SQL above, you will get this error:*

java.lang.ClassCastException: org.apache.calcite.rex.RexLiteral cannot be cast 
to org.apache.calcite.rex.RexCall

at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:478)
at 
org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter.visitCall(RexNodeExtractor.scala:367)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:138)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$$anonfun$extractConjunctiveConditions$1.apply(RexNodeExtractor.scala:137)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor$.extractConjunctiveConditions(RexNodeExtractor.scala:137)
at 
org.apache.flink.table.planner.plan.utils.RexNodeExtractor.extractConjunctiveConditions(RexNodeExtractor.scala)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.extractPredicates(PushFilterIntoSourceScanRuleBase.java:145)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:81)
at 
org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:60)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1.apply(FlinkGroupProgram.scala:55)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 

[jira] [Created] (FLINK-23290) cast '(LZ *3' as boolean, get a null

2021-07-07 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23290:
--

 Summary: cast '(LZ *3' as boolean, get a null
 Key: FLINK-23290
 URL: https://issues.apache.org/jira/browse/FLINK-23290
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
Reporter: xiaojin.wy


CREATE TABLE database5_t0(
c0 VARCHAR , c1 BIGINT 
) WITH (
 'connector' = 'filesystem',
 'path' = 'hdfs:///tmp/database5_t0.csv',
 'format' = 'csv'
);
INSERT OVERWRITE database5_t0(c0, c1) VALUES('(LZ *3', 2135917226)
SELECT database5_t0.c0 AS ref0 FROM database5_t0 WHERE CAST (database5_t0.c0 AS 
BOOLEAN)
*After executing that, you will get the error:*
Caused by: java.lang.NullPointerException
at BatchExecCalc$20.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:101)
at 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:82)
at 
org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:319)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:414)
at 
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:104)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:62)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:269)
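
A hedged workaround sketch while the NPE stands: compare the cast against TRUE 
instead of using it bare (the IS TRUE rewrite is an assumption about the null 
semantics, not a verified fix):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CastBooleanWorkaround {
    public static void main(String[] args) {
        // Assumes the database5_t0 table from the report is already registered.
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inBatchMode().build());
        // CAST('(LZ *3' AS BOOLEAN) yields NULL; IS TRUE maps that NULL to FALSE
        // instead of handing the generated code a null Boolean to unbox.
        tEnv.executeSql(
                "SELECT database5_t0.c0 AS ref0 FROM database5_t0"
                        + " WHERE CAST(database5_t0.c0 AS BOOLEAN) IS TRUE")
                .print();
    }
}
{code}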




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23288) Inserting (1.378593404E9) (0.6047707965147558) to a double type, it will generate CodeGenException

2021-07-06 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23288:
---
Description: 
CREATE TABLE database5_t0(
`c0` DOUBLE , `c1` INTEGER , `c2` STRING
) WITH (
 'connector' = 'filesystem',
'path' = 'hdfs:///tmp/database5_t0.csv',   
 'format' = 'csv'
)
INSERT OVERWRITE database5_t0(c0, c1, c2) VALUES(1.378593404E9, 1336919677, 
'1969-12-31 20:29:41'), (0.6047707965147558, 1336919677, '1970-01-06 03:36:50')

*After executing the SQL above, it will generate this error, but MySQL, 
PostgreSQL and SQLite don't have it:*

org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types of 
expression and result type. 
Expression[GeneratedExpression(((org.apache.flink.table.data.DecimalData) 
decimal$3),false,,DECIMAL(17, 16) NOT NULL,Some(0.6047707965147558))] type is 
[DECIMAL(17, 16) NOT NULL], result type is [DOUBLE NOT NULL]

at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:312)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:300)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:300)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:256)
at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$$anonfun$1.apply(ValuesCodeGenerator.scala:44)
at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$$anonfun$1.apply(ValuesCodeGenerator.scala:43)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:50)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:171)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:247)
at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:58)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:171)
at 
org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:81)
at 
org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:80)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:80)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:174)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1658)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:747)
at 
org.apache.flink.table.api.internal.StatementSetImpl.execute(StatementSetImpl.java:100)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:483)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
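
A hedged workaround sketch, not verified against this build: cast each literal explicitly to DOUBLE so the row type produced by VALUES already matches the sink column, instead of leaving the second literal as DECIMAL(17, 16):

INSERT OVERWRITE database5_t0(c0, c1, c2) VALUES
(CAST(1.378593404E9 AS DOUBLE), 1336919677, '1969-12-31 20:29:41'),
(CAST(0.6047707965147558 AS DOUBLE), 1336919677, '1970-01-06 03:36:50')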

[jira] [Created] (FLINK-23288) Inserting (1.378593404E9) (0.6047707965147558) to a double type, it will generate CodeGenException

2021-07-06 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23288:
--

 Summary: Inserting (1.378593404E9) (0.6047707965147558) to a 
double type, it will generate CodeGenException
 Key: FLINK-23288
 URL: https://issues.apache.org/jira/browse/FLINK-23288
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
Reporter: xiaojin.wy


CREATE TABLE database5_t0(
`c0` DOUBLE , `c1` INTEGER , `c2` STRING
) WITH (
 'connector' = 'filesystem',
 'path' = 'hdfs:///tmp/database5_t0.csv',
 'format' = 'csv'
)
INSERT OVERWRITE database5_t0(c0, c1, c2) VALUES(1.378593404E9, 1336919677, 
'1969-12-31 20:29:41'), (0.6047707965147558, 1336919677, '1970-01-06 03:36:50')

*After executing these statements, Flink raises the following error, while MySQL, PostgreSQL, and SQLite do not:*

org.apache.flink.table.planner.codegen.CodeGenException: Incompatible types of 
expression and result type. 
Expression[GeneratedExpression(((org.apache.flink.table.data.DecimalData) 
decimal$3),false,,DECIMAL(17, 16) NOT NULL,Some(0.6047707965147558))] type is 
[DECIMAL(17, 16) NOT NULL], result type is [DOUBLE NOT NULL]

at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:312)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator$$anonfun$generateResultExpression$1.apply(ExprCodeGenerator.scala:300)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:300)
at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateResultExpression(ExprCodeGenerator.scala:256)
at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$$anonfun$1.apply(ValuesCodeGenerator.scala:44)
at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$$anonfun$1.apply(ValuesCodeGenerator.scala:43)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator$.generatorInputFormat(ValuesCodeGenerator.scala:43)
at 
org.apache.flink.table.planner.codegen.ValuesCodeGenerator.generatorInputFormat(ValuesCodeGenerator.scala)
at 
org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecValues.translateToPlanInternal(CommonExecValues.java:50)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:171)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:247)
at 
org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecSink.translateToPlanInternal(BatchExecSink.java:58)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:171)
at 
org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:81)
at 
org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:80)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:80)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:174)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1658)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:747)

[jira] [Updated] (FLINK-23271) RuntimeException: while resolving method 'booleanValue' in class class java.math.BigDecimal

2021-07-06 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23271:
---
Description: 
*SQL:*

CREATE TABLE database8_t0(
c0 DECIMAL , c1 SMALLINT
) WITH (
 'connector' = 'filesystem',
 'path' = 'hdfs:///tmp/database8_t0.csv',
 'format' = 'csv'
);
INSERT OVERWRITE database8_t0(c0, c1) VALUES(2113554022, cast(-22975 as 
SMALLINT)), (1570419395, cast(-26858 as SMALLINT)), (-1569861129, cast(-20143 
as SMALLINT));
SELECT database8_t0.c0 AS ref0 FROM database8_t0 WHERE CAST
(0.10915913549909961 AS BOOLEAN);

*After executing the SQL, you will see this error:*


java.lang.RuntimeException: while resolving method 'booleanValue' in class 
class java.math.BigDecimal

at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:424)
at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:435)
at 
org.apache.calcite.linq4j.tree.Expressions.unbox(Expressions.java:1453)
at 
org.apache.calcite.adapter.enumerable.EnumUtils.convert(EnumUtils.java:398)
at 
org.apache.calcite.adapter.enumerable.EnumUtils.convert(EnumUtils.java:326)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateCast(RexToLixTranslator.java:538)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$CastImplementor.implementSafe(RexImpTable.java:2450)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$AbstractRexCallImplementor.genValueStatement(RexImpTable.java:2894)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$AbstractRexCallImplementor.implement(RexImpTable.java:2859)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitCall(RexToLixTranslator.java:1084)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitCall(RexToLixTranslator.java:90)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitLocalRef(RexToLixTranslator.java:970)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitLocalRef(RexToLixTranslator.java:90)
at org.apache.calcite.rex.RexLocalRef.accept(RexLocalRef.java:75)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:237)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:231)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateList(RexToLixTranslator.java:818)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateProjects(RexToLixTranslator.java:198)
at 
org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:90)
at 
org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:66)
at 
org.apache.calcite.rex.RexExecutorImpl.reduce(RexExecutorImpl.java:128)
at 
org.apache.calcite.rex.RexSimplify.simplifyCast(RexSimplify.java:2101)
at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:326)
at 
org.apache.calcite.rex.RexSimplify.simplifyUnknownAs(RexSimplify.java:287)
at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:262)
at 
org.apache.flink.table.planner.plan.utils.FlinkRexUtil$.simplify(FlinkRexUtil.scala:224)
at 
org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.simplify(SimplifyFilterConditionRule.scala:63)
at 
org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.onMatch(SimplifyFilterConditionRule.scala:46)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:60)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
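
A hedged rewrite sketch, assuming a boolean filter is what was intended: compare the DECIMAL literal explicitly instead of casting it to BOOLEAN, which is the cast Calcite fails to reduce here:

SELECT database8_t0.c0 AS ref0 FROM database8_t0
WHERE 0.10915913549909961 <> 0;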

[jira] [Created] (FLINK-23271) RuntimeException: while resolving method 'booleanValue' in class class java.math.BigDecimal

2021-07-06 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23271:
--

 Summary: RuntimeException: while resolving method 'booleanValue' 
in class class java.math.BigDecimal
 Key: FLINK-23271
 URL: https://issues.apache.org/jira/browse/FLINK-23271
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: xiaojin.wy


*-- sql --*
CREATE TABLE database8_t0(
c0 DECIMAL , c1 SMALLINT
) WITH (
 'connector' = 'filesystem',
 'path' = 'hdfs:///tmp/database8_t0.csv',
 'format' = 'csv'
);
INSERT OVERWRITE database8_t0(c0, c1) VALUES(2113554022, cast(-22975 as 
SMALLINT)), (1570419395, cast(-26858 as SMALLINT)), (-1569861129, cast(-20143 
as SMALLINT));
SELECT database8_t0.c0 AS ref0 FROM database8_t0 WHERE CAST
(0.10915913549909961 AS BOOLEAN);

After executing the SQL, you will see this error:
java.lang.RuntimeException: while resolving method 'booleanValue' in class 
class java.math.BigDecimal

at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:424)
at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:435)
at 
org.apache.calcite.linq4j.tree.Expressions.unbox(Expressions.java:1453)
at 
org.apache.calcite.adapter.enumerable.EnumUtils.convert(EnumUtils.java:398)
at 
org.apache.calcite.adapter.enumerable.EnumUtils.convert(EnumUtils.java:326)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateCast(RexToLixTranslator.java:538)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$CastImplementor.implementSafe(RexImpTable.java:2450)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$AbstractRexCallImplementor.genValueStatement(RexImpTable.java:2894)
at 
org.apache.calcite.adapter.enumerable.RexImpTable$AbstractRexCallImplementor.implement(RexImpTable.java:2859)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitCall(RexToLixTranslator.java:1084)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitCall(RexToLixTranslator.java:90)
at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitLocalRef(RexToLixTranslator.java:970)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitLocalRef(RexToLixTranslator.java:90)
at org.apache.calcite.rex.RexLocalRef.accept(RexLocalRef.java:75)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:237)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:231)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateList(RexToLixTranslator.java:818)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateProjects(RexToLixTranslator.java:198)
at 
org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:90)
at 
org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:66)
at 
org.apache.calcite.rex.RexExecutorImpl.reduce(RexExecutorImpl.java:128)
at 
org.apache.calcite.rex.RexSimplify.simplifyCast(RexSimplify.java:2101)
at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:326)
at 
org.apache.calcite.rex.RexSimplify.simplifyUnknownAs(RexSimplify.java:287)
at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:262)
at 
org.apache.flink.table.planner.plan.utils.FlinkRexUtil$.simplify(FlinkRexUtil.scala:224)
at 
org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.simplify(SimplifyFilterConditionRule.scala:63)
at 
org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.onMatch(SimplifyFilterConditionRule.scala:46)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkGroupProgram$$anonfun$optimize$1$$anonfun$apply$1.apply(FlinkGroupProgram.scala:63)

[jira] [Created] (FLINK-23188) Unsupported function definition: IFNULL. Only user defined functions are supported as inline functions

2021-06-29 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23188:
--

 Summary: Unsupported function definition: IFNULL. Only user 
defined functions are supported as inline functions
 Key: FLINK-23188
 URL: https://issues.apache.org/jira/browse/FLINK-23188
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: xiaojin.wy


CREATE TABLE database0_t0(
c0 FLOAT
) WITH (
  'connector' = 'filesystem',
  'path' = 'hdfs:///tmp/database0_t0.csv',
  'format' = 'csv'
);
INSERT OVERWRITE database0_t0(c0) VALUES(0.40445197);

SELECT database0_t0.c0 AS ref0 FROM database0_t0 WHERE 
((IFNULL(database0_t0.c1, database0_t0.c1)) IS NULL);

The error is:
"(BridgingSqlFunction.java:76)
  at org.apache.flink.table.planner.functions.bridging.BridgingSqlFunction.of(BridgingSqlFunction.java:116)
  at org.apache.flink.table.planner.expressions.converter.FunctionDefinitionConvertRule.convert(FunctionDefinitionConvertRule.java:65)
  at org.apache.flink.table.planner.expressions.converter.ExpressionConverter.visit(ExpressionConverter.java:97)
  at org.apache.flink.table.planner.expressions.converter.ExpressionConverter.visit(ExpressionConverter.java:71)
  at org.apache.flink.table.expressions.CallExpression.accept(CallExpression.java:134)
  at org.apache.flink.table.planner.expressions.converter.ExpressionConverter$1.toRexNode(ExpressionConverter.java:247)
  at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
  at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
  at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
  at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
  at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
  at org.apache.flink.table.planner.expressions.converter.ExpressionConverter.toRexNodes(ExpressionConverter.java:240)
  at org.apache.flink.table.planner.expressions.converter.DirectConvertRule.lambda$convert$0(DirectConvertRule.java:220)
  at java.util.Optional.map(Optional.java:215)
  at org.apache.flink.table.planner.expressions.converter.DirectConvertRule.convert(DirectConvertRule.java:217)
  at org.apache.flink.table.planner.expressions.converter.ExpressionConverter.visit(ExpressionConverter.java:97)
  at org.apache.flink.table.planner.expressions.converter.ExpressionConverter.visit(ExpressionConverter.java:71)
  at org.apache.flink.table.expressions.CallExpression.accept(CallExpression.java:134)
  at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.lambda$convertExpressionToRexNode$0(PushFilterIntoSourceScanRuleBase.java:73)
  at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
  at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
  at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
  at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
  at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
  at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
  at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.convertExpressionToRexNode(PushFilterIntoSourceScanRuleBase.java:73)
  at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoSourceScanRuleBase.resolveFiltersAndCreateTableSourceTable(PushFilterIntoSourceScanRuleBase.java:116)
  at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.pushFilterIntoScan(PushFilterIntoTableSourceScanRule.java:95)
  at org.apache.flink.table.planner.plan.rules.logical.PushFilterIntoTableSourceScanRule.onMatch(PushFilterIntoTableSourceScanRule.java:70)
  at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
  at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
  at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
  at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
  at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
  at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
  at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
  at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
  at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
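
A hedged alternative sketch: COALESCE is the standard-SQL spelling of the same null-default logic and avoids the inline-function path that rejects IFNULL; since database0_t0 only declares c0, the sketch uses c0 in place of the missing c1:

SELECT database0_t0.c0 AS ref0 FROM database0_t0
WHERE (COALESCE(database0_t0.c0, database0_t0.c0)) IS NULL;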

[jira] [Updated] (FLINK-23184) CompileException Assignment conversion not possible from type "int" to type "short"

2021-06-29 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23184:
---
Description: 
CREATE TABLE MySink (
  `a` SMALLINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath'
)

CREATE TABLE database8_t0 (
  `c0` SMALLINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath11'
)

CREATE TABLE database8_t1 (
  `c0` SMALLINT,
  `c1` TINYINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath22'
)

INSERT OVERWRITE database8_t0(c0) VALUES(cast(22424 as SMALLINT))
INSERT OVERWRITE database8_t1(c0, c1) VALUES(cast(-17443 as SMALLINT), cast(97 
as TINYINT))
insert into MySink
SELECT database8_t0.c0 AS ref0 FROM database8_t0, database8_t1 WHERE CAST ((- 
(database8_t0.c0)) AS BOOLEAN)


After running that, you will get this error:
2021-06-29 19:39:27
org.apache.flink.runtime.JobException: Recovery is suppressed by 
NoRestartBackoffTimeStrategy
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:207)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:197)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:188)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:677)
at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:440)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: Could not instantiate generated class 
'BatchExecCalc$4536'
at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:66)
at 
org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:43)
at 
org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:80)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperator(OperatorChain.java:626)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperatorChain(OperatorChain.java:600)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:540)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:171)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:547)
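
A hedged workaround sketch, untested: write the filter as an explicit comparison so the generated code never has to cast a negated SMALLINT to BOOLEAN:

insert into MySink
SELECT database8_t0.c0 AS ref0 FROM database8_t0, database8_t1
WHERE (- (database8_t0.c0)) <> 0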

[jira] [Created] (FLINK-23184) CompileException Assignment conversion not possible from type "int" to type "short"

2021-06-29 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23184:
--

 Summary: CompileException Assignment conversion not possible from 
type "int" to type "short"
 Key: FLINK-23184
 URL: https://issues.apache.org/jira/browse/FLINK-23184
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.14.0
Reporter: xiaojin.wy


CREATE TABLE MySink (
  `a` SMALLINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath'
)

CREATE TABLE database8_t0 (
  `c0` SMALLINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath11'
)

CREATE TABLE database8_t1 (
  `c0` SMALLINT,
  `c1` TINYINT
) WITH (
  'connector' = 'filesystem',
  'format' = 'testcsv',
  'path' = '$resultPath22'
)

INSERT OVERWRITE database8_t0(c0) VALUES(cast(22424 as SMALLINT))
INSERT OVERWRITE database8_t1(c0, c1) VALUES(cast(-17443 as SMALLINT), cast(97 
as TINYINT))
insert into MySink
SELECT database8_t0.c0 AS ref0 FROM database8_t0, database8_t1 WHERE CAST ((- 
(database8_t0.c0)) AS BOOLEAN)


Executing that, you will get this error:
2021-06-29 19:39:27
org.apache.flink.runtime.JobException: Recovery is suppressed by 
NoRestartBackoffTimeStrategy
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:207)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:197)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:188)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:677)
at 
org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:79)
at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:440)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:305)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: Could not instantiate generated class 
'BatchExecCalc$4536'
at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:66)
at 
org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.createStreamOperator(CodeGenOperatorFactory.java:43)
at 
org.apache.flink.streaming.api.operators.StreamOperatorFactoryUtil.createOperator(StreamOperatorFactoryUtil.java:80)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperator(OperatorChain.java:626)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.createOperatorChain(OperatorChain.java:600)

[jira] [Updated] (FLINK-23077) Running nexmark q5 with 1.13.1 of pipeline.object-reuse=true, the taskmanager will be killed and produce failover.

2021-06-21 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-23077:
---
Description: 
Running Nexmark with Flink 1.13.0 and 1.13.1, q5 cannot succeed.
*The conf is:*
pipeline.object-reuse=true
*The sql is:*
CREATE TABLE discard_sink (
  auction BIGINT,
  num BIGINT
) WITH (
  'connector' = 'blackhole'
);

INSERT INTO discard_sink
SELECT AuctionBids.auction, AuctionBids.num
FROM (
  SELECT
    B1.auction,
    count(*) AS num,
    HOP_START(B1.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND) AS starttime,
    HOP_END(B1.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND) AS endtime
  FROM bid B1
  GROUP BY
    B1.auction,
    HOP(B1.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND)
) AS AuctionBids
JOIN (
  SELECT
    max(CountBids.num) AS maxn,
    CountBids.starttime,
    CountBids.endtime
  FROM (
    SELECT
      count(*) AS num,
      HOP_START(B2.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND) AS starttime,
      HOP_END(B2.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND) AS endtime
    FROM bid B2
    GROUP BY
      B2.auction,
      HOP(B2.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND)
  ) AS CountBids
  GROUP BY CountBids.starttime, CountBids.endtime
) AS MaxBids
ON AuctionBids.starttime = MaxBids.starttime AND
   AuctionBids.endtime = MaxBids.endtime AND
   AuctionBids.num >= MaxBids.maxn;


*The error is:*
2021-06-21 15:00:19,992 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Job 
insert-into_default_catalog.default_database.discard_sink 
(676beebce60930ac033522b4367806b0) switched from state FAILING to FAILED.
org.apache.flink.runtime.JobException: Recovery is suppressed by 
NoRestartBackoffTimeStrategy
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:138)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:82)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:207)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:197)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:188)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:677)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.scheduler.UpdateSchedulerNgOnInternalFailuresListener.notifyTaskFailure(UpdateSchedulerNgOnInternalFailuresListener.java:51)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.notifySchedulerNgAboutInternalTaskFailure(DefaultExecutionGraph.java:1462)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.executiongraph.Execution.processFail(Execution.java:1140)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.executiongraph.Execution.processFail(Execution.java:1080)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.executiongraph.Execution.markFailed(Execution.java:911)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.executiongraph.ExecutionVertex.markFailed(ExecutionVertex.java:472)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.scheduler.DefaultExecutionVertexOperations.markFailed(DefaultExecutionVertexOperations.java:41)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskDeploymentFailure(DefaultScheduler.java:498)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.lambda$assignResourceOrHandleError$7(DefaultScheduler.java:483)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
at 
java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) 
~[?:1.8.0_102]
at 
java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
 ~[?:1.8.0_102]
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) 
~[?:1.8.0_102]
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
 ~[?:1.8.0_102]
at 
org.apache.flink.runtime.jobmaster.slotpool.DeclarativeSlotPoolBridge$PendingRequest.failRequest(DeclarativeSlotPoolBridge.java:532)
 ~[flink-dist_2.11-1.13.1.jar:1.13.1]
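
A hedged mitigation sketch while the root cause is open: the failure only shows up with object reuse enabled, so turning it off for this job in the SQL client (1.13 SET syntax assumed) should avoid the failover at the cost of extra record copies:

SET pipeline.object-reuse=false;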

[jira] [Created] (FLINK-23077) Running nexmark q5 with 1.13.1 of pipeline.object-reuse=true, the taskmanager will be killed and produce failover.

2021-06-21 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23077:
--

 Summary: Running nexmark q5 with 1.13.1 of 
pipeline.object-reuse=true, the taskmanager will be killed and produce failover.
 Key: FLINK-23077
 URL: https://issues.apache.org/jira/browse/FLINK-23077
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.13.1, 1.13.0
Reporter: xiaojin.wy


Running Nexmark with Flink 1.13.0 and 1.13.1, q5 cannot succeed.
*The conf is:*
pipeline.object-reuse=true
*The sql is:*
CREATE TABLE discard_sink (
  auction BIGINT,
  num BIGINT
) WITH (
  'connector' = 'blackhole'
);

INSERT INTO discard_sink
SELECT AuctionBids.auction, AuctionBids.num
FROM (
  SELECT
    B1.auction,
    count(*) AS num,
    HOP_START(B1.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND) AS starttime,
    HOP_END(B1.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND) AS endtime
  FROM bid B1
  GROUP BY
    B1.auction,
    HOP(B1.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND)
) AS AuctionBids
JOIN (
  SELECT
    max(CountBids.num) AS maxn,
    CountBids.starttime,
    CountBids.endtime
  FROM (
    SELECT
      count(*) AS num,
      HOP_START(B2.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND) AS starttime,
      HOP_END(B2.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND) AS endtime
    FROM bid B2
    GROUP BY
      B2.auction,
      HOP(B2.dateTime, INTERVAL '2' SECOND, INTERVAL '10' SECOND)
  ) AS CountBids
  GROUP BY CountBids.starttime, CountBids.endtime
) AS MaxBids
ON AuctionBids.starttime = MaxBids.starttime AND
   AuctionBids.endtime = MaxBids.endtime AND
   AuctionBids.num >= MaxBids.maxn;


*The error is:*
(see the attached screenshot: image-2021-06-22-11-30-58-022.png)





[jira] [Commented] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of

2020-06-30 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148604#comment-17148604
 ] 

xiaojin.wy commented on FLINK-18364:


mark [~TsReaper]

> A streaming sql cause "org.apache.flink.table.api.ValidationException: Type 
> TIMESTAMP(6) of table field 'rowtime' does not match with the physical type 
> TIMESTAMP(3) of the 'rowtime' field of the TableSink consumed type"
> ---
>
> Key: FLINK-18364
> URL: https://issues.apache.org/jira/browse/FLINK-18364
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: The input data is:
> 2015-02-15 10:15:00.0|1|paint|10
> 2015-02-15 10:24:15.0|2|paper|5
> 2015-02-15 10:24:45.0|3|brush|12
> 2015-02-15 10:58:00.0|4|paint|3
> 2015-02-15 11:10:00.0|5|paint|3
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> *The whole error is:*
> Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of the 'rowtime' field of the TableSink consumed type.
> at org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
> at org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
> at org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
> at org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
> at org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
> at org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
> at org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
> at org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
> at org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
> at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
> at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
> at scala.Option.map(Option.scala:146)
> at org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
> at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
> at org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
> at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
> at com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
> at com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
> at com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
> at com.ververica.flink.table.gateway.operation.SelectOperation.lambda$executeQueryInternal$1(SelectOperation.java:220)
> at com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoaderWithException(ExecutionContext.java:197)
> at com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:219)
> ... 48 more
> I run the sql by sql-gateway.
>  When I run it in a batch environment, the sql run 
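
A hedged sketch of the usual fix for this class of mismatch, assuming the sink consumes TIMESTAMP(3): cast the rowtime expression down to the sink's precision before writing (the table names below are placeholders, not from the report):

-- MySink and src are hypothetical names; the CAST is the point
INSERT INTO MySink
SELECT CAST(rowtime AS TIMESTAMP(3)) AS rowtime
FROM src;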

[jira] [Updated] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-23 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-18371:
---
Environment: 
I use the sql-gateway to run this sql.

The environment is streaming.
*The sql is:*
CREATE TABLE `src` (
key bigint,
v varchar
) WITH (
'connector'='filesystem',
'csv.field-delimiter'='|',
'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
'csv.null-literal'='',
'format'='csv'
)

select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1
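
Since 'csv.null-literal'='' turns empty fields into NULLs, a hedged workaround sketch, assuming the converter NPE is triggered by NULL keys, is to filter them out before the casts:

select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
where key is not null
order by c1, c2, c3
limit 1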

*The input data is:*
238|val_238
86|val_86
311|val_311
27|val_27
165|val_165
409|val_409
255|val_255
278|val_278
98|val_98
484|val_484
265|val_265
193|val_193
401|val_401
150|val_150
273|val_273
224|val_224
369|val_369
66|val_66
128|val_128
213|val_213
146|val_146
406|val_406
429|val_429
374|val_374
152|val_152
469|val_469
145|val_145
495|val_495
37|val_37
327|val_327
281|val_281
277|val_277
209|val_209
15|val_15
82|val_82
403|val_403
166|val_166
417|val_417
430|val_430
252|val_252
292|val_292
219|val_219
287|val_287
153|val_153
193|val_193
338|val_338
446|val_446
459|val_459
394|val_394
237|val_237
482|val_482
174|val_174
413|val_413
494|val_494
207|val_207
199|val_199
466|val_466
208|val_208
174|val_174
399|val_399
396|val_396
247|val_247
417|val_417
489|val_489
162|val_162
377|val_377
397|val_397
309|val_309
365|val_365
266|val_266
439|val_439
342|val_342
367|val_367
325|val_325
167|val_167
195|val_195
475|val_475
17|val_17
113|val_113
155|val_155
203|val_203
339|val_339
0|val_0
455|val_455
128|val_128
311|val_311
316|val_316
57|val_57
302|val_302
205|val_205
149|val_149
438|val_438
345|val_345
129|val_129
170|val_170
20|val_20
489|val_489
157|val_157
378|val_378
221|val_221
92|val_92
111|val_111
47|val_47
72|val_72
4|val_4
280|val_280
35|val_35
427|val_427
277|val_277
208|val_208
356|val_356
399|val_399
169|val_169
382|val_382
498|val_498
125|val_125
386|val_386
437|val_437
469|val_469
192|val_192
286|val_286
187|val_187
176|val_176
54|val_54
459|val_459
51|val_51
138|val_138
103|val_103
239|val_239
213|val_213
216|val_216
430|val_430
278|val_278
176|val_176
289|val_289
221|val_221
65|val_65
318|val_318
332|val_332
311|val_311
275|val_275
137|val_137
241|val_241
83|val_83
333|val_333
180|val_180
284|val_284
12|val_12
230|val_230
181|val_181
67|val_67
260|val_260
404|val_404
384|val_384
489|val_489
353|val_353
373|val_373
272|val_272
138|val_138
217|val_217
84|val_84
348|val_348
466|val_466
58|val_58
8|val_8
411|val_411
230|val_230
208|val_208
348|val_348
24|val_24
463|val_463
431|val_431
179|val_179
172|val_172
42|val_42
129|val_129
158|val_158
119|val_119
496|val_496
0|val_0
322|val_322
197|val_197
468|val_468
393|val_393
454|val_454
100|val_100
298|val_298
199|val_199
191|val_191
418|val_418
96|val_96
26|val_26
165|val_165
327|val_327
230|val_230
205|val_205
120|val_120
131|val_131
51|val_51
404|val_404
43|val_43
436|val_436
156|val_156
469|val_469
468|val_468
308|val_308
95|val_95
196|val_196
288|val_288
481|val_481
457|val_457
98|val_98
282|val_282
197|val_197
187|val_187
318|val_318
318|val_318
409|val_409
470|val_470
137|val_137
369|val_369
316|val_316
169|val_169
413|val_413
85|val_85
77|val_77
0|val_0
490|val_490
87|val_87
364|val_364
179|val_179
118|val_118
134|val_134
395|val_395
282|val_282
138|val_138
238|val_238
419|val_419
15|val_15
118|val_118
72|val_72
90|val_90
307|val_307
19|val_19
435|val_435
10|val_10
277|val_277
273|val_273
306|val_306
224|val_224
309|val_309
389|val_389
327|val_327
242|val_242
369|val_369
392|val_392
272|val_272
331|val_331
401|val_401
242|val_242
452|val_452
177|val_177
226|val_226
5|val_5
497|val_497
402|val_402
396|val_396
317|val_317
395|val_395
58|val_58
35|val_35
336|val_336
95|val_95
11|val_11
168|val_168
34|val_34
229|val_229
233|val_233
143|val_143
472|val_472
322|val_322
498|val_498
160|val_160
195|val_195
42|val_42
321|val_321
430|val_430
119|val_119
489|val_489
458|val_458
78|val_78
76|val_76
41|val_41
223|val_223
492|val_492
149|val_149
449|val_449
218|val_218
228|val_228
138|val_138
453|val_453
30|val_30
209|val_209
64|val_64
468|val_468
76|val_76
74|val_74
342|val_342
69|val_69
230|val_230
33|val_33
368|val_368
103|val_103
296|val_296
113|val_113
216|val_216
367|val_367
344|val_344
167|val_167
274|val_274
219|val_219
239|val_239
485|val_485
116|val_116
223|val_223
256|val_256
263|val_263
70|val_70
487|val_487
480|val_480
401|val_401
288|val_288
191|val_191
5|val_5
244|val_244
438|val_438
128|val_128
467|val_467
432|val_432
202|val_202
316|val_316
229|val_229
469|val_469
463|val_463
280|val_280
2|val_2
35|val_35
283|val_283
331|val_331
235|val_235
80|val_80
44|val_44
193|val_193
321|val_321
335|val_335
104|val_104
466|val_466
366|val_366

[jira] [Updated] (FLINK-15397) Streaming and batch has different value in the case of count function

2020-06-22 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15397:
---
Fix Version/s: 1.11.0
Affects Version/s: 1.11.0

> Streaming and batch has different value in the case of count function
> -
>
> Key: FLINK-15397
> URL: https://issues.apache.org/jira/browse/FLINK-15397
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0, 1.11.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> *The sql is:*
> CREATE TABLE `testdata` (
>   a INT,
>   b INT
> ) WITH (
>   
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> SELECT COUNT(1) FROM testdata WHERE false;
> If the configuration's type is batch, the result will be 0, but if the 
> configuration is streaming, there will be no value.
> *The configuration is:*
> execution:
>   planner: blink
>   type: streaming
> *The input data is:*
> {code:java}
> 1|1
> 1|2
> 2|1
> 2|2
> 3|1
> 3|2
> |1
> 3|
> |
> {code}
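
A minimal sketch of the semantics at stake (assumption: standard SQL aggregate behavior, runnable as a self-contained query): a global COUNT over an empty input must still return one row, which is what the batch result shows and the streaming result omits.

{code:sql}
-- COUNT with no GROUP BY is a global aggregate: even when WHERE eliminates
-- every row, it returns exactly one row containing 0.
SELECT COUNT(1) FROM (VALUES (1, 1), (2, 2)) AS testdata(a, b) WHERE false;
-- expected result: a single row with value 0
{code}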



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18411) release-1.11 compile failed

2020-06-22 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-18411:
--

 Summary: release-1.11 compile failed
 Key: FLINK-18411
 URL: https://issues.apache.org/jira/browse/FLINK-18411
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.11.0
 Environment: The command is "/home/admin/apache-maven-3.2.5/bin/mvn clean install -B -U -DskipTests -Drat.skip=true -Dcheckstyle.skip=true"
Reporter: xiaojin.wy
 Fix For: 1.11.0
 Attachments: image-2020-06-23-10-00-45-519.png

 !image-2020-06-23-10-00-45-519.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18401) A streaming sql has "Sort on a non-time-attribute field is not supported"

2020-06-21 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-18401:
--

 Summary: A streaming sql has "Sort on a non-time-attribute field 
is not supported"
 Key: FLINK-18401
 URL: https://issues.apache.org/jira/browse/FLINK-18401
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.0
 Environment: I run the sql using sql-gateway.
*The sql is:*
CREATE TABLE `int8_tbl` (
q1 bigint, q2 bigint
) WITH (
'connector'='filesystem',
'csv.field-delimiter'='|',

'path'='/defender_test_data/daily_regression_test_stream_postgres_1.10/tests.postgres.cases.test_join/sources/int8_tbl.csv',
'csv.null-literal'='',
'format'='csv'
)
select * from int8_tbl i1 left join (select * from int8_tbl i2 join (select 123 
as x) ss on i2.q1 = x) as i3 on i1.q2 = i3.q2
order by 1, 2;
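
For reference, a sketch of what the streaming planner does accept here (my assumption, based on the blink planner's Top-N support; untested): sorting on arbitrary fields is only allowed together with a LIMIT, or with an ascending time attribute as the leading sort key.

{code:sql}
-- Top-N variant of the failing query: adding a LIMIT turns the plain sort
-- into a Top-N, which streaming mode supports on non-time attributes.
select * from int8_tbl i1 left join (select * from int8_tbl i2 join (select 123 
as x) ss on i2.q1 = x) as i3 on i1.q2 = i3.q2 
order by 1, 2 limit 5
{code}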
*The input data is:*

123|456
123|4567890123456789
4567890123456789|123
4567890123456789|4567890123456789
4567890123456789|-4567890123456789
Reporter: xiaojin.wy
 Fix For: 1.11.0


Caused by: org.apache.flink.table.api.TableException: Sort on a 
non-time-attribute field is not supported.
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSort.translateToPlanInternal(StreamExecSort.scala:106)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSort.translateToPlanInternal(StreamExecSort.scala:59)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSort.translateToPlan(StreamExecSort.scala:59)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToTransformation(StreamExecLegacySink.scala:158)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:82)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlanInternal(StreamExecLegacySink.scala:48)
at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecLegacySink.translateToPlan(StreamExecLegacySink.scala:48)
at 
org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:67)
at 
org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:66)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:66)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:166)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
at 
org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
at 
com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
at 
com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$executeQueryInternal$1(SelectOperation.java:220)
at 
com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoaderWithException(ExecutionContext.java:197)
at 
com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:219)
... 48 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18400) A streaming sql has "java.lang.NegativeArraySizeException"

2020-06-21 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-18400:
--

 Summary: A streaming sql has "java.lang.NegativeArraySizeException"
 Key: FLINK-18400
 URL: https://issues.apache.org/jira/browse/FLINK-18400
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.0
 Environment: I use sql-gateway to run the sql.
*The sql is:*
SELECT x, round (x ) AS int8_value FROM (VALUES CAST (-2.5 AS DECIMAL(6,1)), 
CAST(-1.5 AS DECIMAL(6,1)), CAST(-0.5 AS DECIMAL(6,1)), CAST(0.0 AS 
DECIMAL(6,1)), CAST(0.5 AS DECIMAL(6,1)), CAST(1.5 AS DECIMAL(6,1)), CAST(2.5 
AS DECIMAL(6,1))) t(x);
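
A single-value sketch might narrow this down (assumption, untested): if the failure is in reading the rounded DecimalData, one row from the original VALUES list should be enough to reproduce it.

{code:sql}
-- Hypothetical minimal repro keeping only the first value.
SELECT x, round(x) AS int8_value FROM (VALUES CAST(-2.5 AS DECIMAL(6,1))) t(x)
{code}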

The environment is streaming.
Reporter: xiaojin.wy
 Fix For: 1.11.0


2020-06-22 08:07:31
org.apache.flink.runtime.JobException: Recovery is suppressed by 
NoRestartBackoffTimeStrategy
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
at 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
at 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.NegativeArraySizeException
at 
org.apache.flink.table.data.binary.BinarySegmentUtils.readDecimalData(BinarySegmentUtils.java:1031)
at 
org.apache.flink.table.data.binary.BinaryRowData.getDecimal(BinaryRowData.java:341)
at 
org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:685)
at 
org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:661)
at 
org.apache.flink.table.data.util.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:401)
at 
org.apache.flink.table.data.util.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1425)
at 
org.apache.flink.table.data.util.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1404)
at 
org.apache.flink.table.data.util.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:383)
at SinkConversion$1162.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:161)
at 

[jira] [Created] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-18 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-18371:
--

 Summary: NPE of 
"org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
 Key: FLINK-18371
 URL: https://issues.apache.org/jira/browse/FLINK-18371
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.0
 Environment: I use the sql-gateway to run this sql.
*The sql is:*
CREATE TABLE `src` (
key bigint,
v varchar
) WITH (
'connector'='filesystem',
'csv.field-delimiter'='|',

'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
'csv.null-literal'='',
'format'='csv'
)

select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1

*The input data is:*
238|val_238
86|val_86
311|val_311
27|val_27
165|val_165
409|val_409
255|val_255
278|val_278
98|val_98
484|val_484
265|val_265
193|val_193
401|val_401
150|val_150
273|val_273
224|val_224
369|val_369
66|val_66
128|val_128
213|val_213
146|val_146
406|val_406
429|val_429
374|val_374
152|val_152
469|val_469
145|val_145
495|val_495
37|val_37
327|val_327
281|val_281
277|val_277
209|val_209
15|val_15
82|val_82
403|val_403
166|val_166
417|val_417
430|val_430
252|val_252
292|val_292
219|val_219
287|val_287
153|val_153
193|val_193
338|val_338
446|val_446
459|val_459
394|val_394
237|val_237
482|val_482
174|val_174
413|val_413
494|val_494
207|val_207
199|val_199
466|val_466
208|val_208
174|val_174
399|val_399
396|val_396
247|val_247
417|val_417
489|val_489
162|val_162
377|val_377
397|val_397
309|val_309
365|val_365
266|val_266
439|val_439
342|val_342
367|val_367
325|val_325
167|val_167
195|val_195
475|val_475
17|val_17
113|val_113
155|val_155
203|val_203
339|val_339
0|val_0
455|val_455
128|val_128
311|val_311
316|val_316
57|val_57
302|val_302
205|val_205
149|val_149
438|val_438
345|val_345
129|val_129
170|val_170
20|val_20
489|val_489
157|val_157
378|val_378
221|val_221
92|val_92
111|val_111
47|val_47
72|val_72
4|val_4
280|val_280
35|val_35
427|val_427
277|val_277
208|val_208
356|val_356
399|val_399
169|val_169
382|val_382
498|val_498
125|val_125
386|val_386
437|val_437
469|val_469
192|val_192
286|val_286
187|val_187
176|val_176
54|val_54
459|val_459
51|val_51
138|val_138
103|val_103
239|val_239
213|val_213
216|val_216
430|val_430
278|val_278
176|val_176
289|val_289
221|val_221
65|val_65
318|val_318
332|val_332
311|val_311
275|val_275
137|val_137
241|val_241
83|val_83
333|val_333
180|val_180
284|val_284
12|val_12
230|val_230
181|val_181
67|val_67
260|val_260
404|val_404
384|val_384
489|val_489
353|val_353
373|val_373
272|val_272
138|val_138
217|val_217
84|val_84
348|val_348
466|val_466
58|val_58
8|val_8
411|val_411
230|val_230
208|val_208
348|val_348
24|val_24
463|val_463
431|val_431
179|val_179
172|val_172
42|val_42
129|val_129
158|val_158
119|val_119
496|val_496
0|val_0
322|val_322
197|val_197
468|val_468
393|val_393
454|val_454
100|val_100
298|val_298
199|val_199
191|val_191
418|val_418
96|val_96
26|val_26
165|val_165
327|val_327
230|val_230
205|val_205
120|val_120
131|val_131
51|val_51
404|val_404
43|val_43
436|val_436
156|val_156
469|val_469
468|val_468
308|val_308
95|val_95
196|val_196
288|val_288
481|val_481
457|val_457
98|val_98
282|val_282
197|val_197
187|val_187
318|val_318
318|val_318
409|val_409
470|val_470
137|val_137
369|val_369
316|val_316
169|val_169
413|val_413
85|val_85
77|val_77
0|val_0
490|val_490
87|val_87
364|val_364
179|val_179
118|val_118
134|val_134
395|val_395
282|val_282
138|val_138
238|val_238
419|val_419
15|val_15
118|val_118
72|val_72
90|val_90
307|val_307
19|val_19
435|val_435
10|val_10
277|val_277
273|val_273
306|val_306
224|val_224
309|val_309
389|val_389
327|val_327
242|val_242
369|val_369
392|val_392
272|val_272
331|val_331
401|val_401
242|val_242
452|val_452
177|val_177
226|val_226
5|val_5
497|val_497
402|val_402
396|val_396
317|val_317
395|val_395
58|val_58
35|val_35
336|val_336
95|val_95
11|val_11
168|val_168
34|val_34
229|val_229
233|val_233
143|val_143
472|val_472
322|val_322
498|val_498
160|val_160
195|val_195
42|val_42
321|val_321
430|val_430
119|val_119
489|val_489
458|val_458
78|val_78
76|val_76
41|val_41
223|val_223
492|val_492
149|val_149
449|val_449
218|val_218
228|val_228
138|val_138
453|val_453
30|val_30
209|val_209
64|val_64
468|val_468
76|val_76
74|val_74
342|val_342
69|val_69
230|val_230
33|val_33
368|val_368
103|val_103
296|val_296
113|val_113
216|val_216
367|val_367
344|val_344
167|val_167
274|val_274
219|val_219
239|val_239
485|val_485
116|val_116
223|val_223
256|val_256
263|val_263
70|val_70
487|val_487
480|val_480
401|val_401
288|val_288
191|val_191
5|val_5
244|val_244
438|val_438
128|val_128
467|val_467
432|val_432

[jira] [Commented] (FLINK-18365) The same sql in a batch env and a streaming env has different value.

2020-06-18 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140133#comment-17140133
 ] 

xiaojin.wy commented on FLINK-18365:


[~jark] These two issues are different: FLINK-15397 returns nothing, but this 
one returns None.
FLINK-15397 also appears in other cases when testing flink-1.11.

> The same sql in a batch env and a streaming env has different value.
> 
>
> Key: FLINK-18365
> URL: https://issues.apache.org/jira/browse/FLINK-18365
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> I use the sql-gateway to run this sql.
> *The input table is:*
>  CREATE TABLE `scott_dept` (
>   deptno INT,
>   dname VARCHAR,
>   loc VARCHAR
> ) WITH (
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   
> 'connector.path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_scalar/sources/scott_dept.csv',
>   'format.type'='csv'
> )
> *The input data is:*
> 10|ACCOUNTING|NEW YORK
> 20|RESEARCH|DALLAS
> 30|SALES|CHICAGO
> 40|OPERATIONS|BOSTON
> *The sql is :*
> select deptno, (select count(*) from scott_emp where 1 = 0) as x from 
> scott_dept
> *The error:*
> In a batch environment, the result value is:10|0\n20|0\n30|0\n40|0
> In a streaming environment, the result value 
> is:10|None\n20|None\n30|None\n40|None



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18365) The same sql in a batch env and a streaming env has different value.

2020-06-18 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140133#comment-17140133
 ] 

xiaojin.wy edited comment on FLINK-18365 at 6/19/20, 2:37 AM:
--

[~jark] These two issues are different: FLINK-15397 returns nothing, but this 
one returns None.
FLINK-15397 also appears in other cases when testing flink-1.11.


was (Author: xiaojin.wy):
[~jark] This two issues are different,  FLINK-15397 return nothing, but this 
one return None.
The FLINK-15397 also appear in other cases when testing the link-1.11.

> The same sql in a batch env and a streaming env has different value.
> 
>
> Key: FLINK-18365
> URL: https://issues.apache.org/jira/browse/FLINK-18365
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> I use the sql-gateway to run this sql.
> *The input table is:*
>  CREATE TABLE `scott_dept` (
>   deptno INT,
>   dname VARCHAR,
>   loc VARCHAR
> ) WITH (
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   
> 'connector.path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_scalar/sources/scott_dept.csv',
>   'format.type'='csv'
> )
> *The input data is:*
> 10|ACCOUNTING|NEW YORK
> 20|RESEARCH|DALLAS
> 30|SALES|CHICAGO
> 40|OPERATIONS|BOSTON
> *The sql is :*
> select deptno, (select count(*) from scott_emp where 1 = 0) as x from 
> scott_dept
> *The error:*
> In a batch environment, the result value is:10|0\n20|0\n30|0\n40|0
> In a streaming environment, the result value 
> is:10|None\n20|None\n30|None\n40|None



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(

2020-06-18 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140129#comment-17140129
 ] 

xiaojin.wy edited comment on FLINK-18364 at 6/19/20, 2:33 AM:
--

The error above appears again after the sql is changed to this one:

CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'connector'='filesystem',
'csv.field-delimiter'='|',

'path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
'csv.null-literal'='',
'format'='csv'
)

[~jark]


was (Author: xiaojin.wy):
The error above appear again after the sql change to this one 

CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'connector'='filesystem',
'csv.field-delimiter'='|',

'path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
'csv.null-literal'='',
'format'='csv'
)

> A streaming sql cause "org.apache.flink.table.api.ValidationException: Type 
> TIMESTAMP(6) of table field 'rowtime' does not match with the physical type 
> TIMESTAMP(3) of the 'rowtime' field of the TableSink consumed type"
> ---
>
> Key: FLINK-18364
> URL: https://issues.apache.org/jira/browse/FLINK-18364
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: The input data is:
> 2015-02-15 10:15:00.0|1|paint|10
> 2015-02-15 10:24:15.0|2|paper|5
> 2015-02-15 10:24:45.0|3|brush|12
> 2015-02-15 10:58:00.0|4|paint|3
> 2015-02-15 11:10:00.0|5|paint|3
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> *The whole error is:*
>  Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
> of table field 'rowtime' does not match with the physical type TIMESTAMP(3) 
> of the 'rowtime' field of the TableSink consumed type. at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
>  at 
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
>  at 
> org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
>  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
>  at scala.Option.map(Option.scala:146) at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
>  at 
> 

[jira] [Updated] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of t

2020-06-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-18364:
---
Description: 
*The whole error is:*
 Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of 
the 'rowtime' field of the TableSink consumed type. at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
 at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
 at 
org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
 at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
 at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
 at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
 at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
 at scala.Option.map(Option.scala:146) at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
 at 
org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
 at 
com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
 at 
com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$executeQueryInternal$1(SelectOperation.java:220)
 at 
com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoaderWithException(ExecutionContext.java:197)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:219)
 ... 48 more

I run the sql by sql-gateway.
 When I run it in a batch environment, the sql runs well and produces the 
result "2015-02-15T10:00:00|4\n2015-02-15T11:00:00|1". But after switching to the 
streaming environment, the errors above appear.

*The sql is:*

select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)

*The table query is:*

  CREATE TABLE `orders` (
 rowtime TIMESTAMP,
 id INT,
 product VARCHAR,
 units INT
 ) WITH (
 'format.field-delimiter'='|',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
 'format.type'='csv'
 )

or the query is:

 
CREATE TABLE `orders` (
rowtime TIMESTAMP,
id INT,
product VARCHAR,
units INT
) WITH (
'connector'='filesystem',
'csv.field-delimiter'='|',
'path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
'csv.null-literal'='',
'format'='csv'
)
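
A possible fix sketch (my assumption: the CSV source physically produces TIMESTAMP(3), while a bare TIMESTAMP in Flink DDL means TIMESTAMP(6)): declaring the precision explicitly should make the logical and physical types match.

{code:sql}
CREATE TABLE `orders` (
-- explicit precision so the declared type matches the physical TIMESTAMP(3)
rowtime TIMESTAMP(3),
id INT,
product VARCHAR,
units INT
) WITH (
'connector'='filesystem',
'csv.field-delimiter'='|',
'path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
'csv.null-literal'='',
'format'='csv'
)
{code}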
 

  was:
*The whole error is:*
 Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of 
the 'rowtime' field of the TableSink consumed type. at 

[jira] [Commented] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of

2020-06-18 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140129#comment-17140129
 ] 

xiaojin.wy commented on FLINK-18364:


The error above appears again after the sql is changed to this one:

CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'connector'='filesystem',
'csv.field-delimiter'='|',

'path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
'csv.null-literal'='',
'format'='csv'
)

> A streaming sql cause "org.apache.flink.table.api.ValidationException: Type 
> TIMESTAMP(6) of table field 'rowtime' does not match with the physical type 
> TIMESTAMP(3) of the 'rowtime' field of the TableSink consumed type"
> ---
>
> Key: FLINK-18364
> URL: https://issues.apache.org/jira/browse/FLINK-18364
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: The input data is:
> 2015-02-15 10:15:00.0|1|paint|10
> 2015-02-15 10:24:15.0|2|paper|5
> 2015-02-15 10:24:45.0|3|brush|12
> 2015-02-15 10:58:00.0|4|paint|3
> 2015-02-15 11:10:00.0|5|paint|3
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> *The whole error is:*
>  Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
> of table field 'rowtime' does not match with the physical type TIMESTAMP(3) 
> of the 'rowtime' field of the TableSink consumed type. at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
>  at 
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
>  at 
> org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
>  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
>  at scala.Option.map(Option.scala:146) at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
>  at 
> org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
>  at 
> com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
>  at 
> com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
>  at 
> com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
>  at 
> 

[jira] [Updated] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of t

2020-06-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-18364:
---
Description: 
*The whole error is:*
 Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of 
the 'rowtime' field of the TableSink consumed type. at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
 at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
 at 
org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
 at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
 at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
 at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
 at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
 at scala.Option.map(Option.scala:146) at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
 at 
org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
 at 
com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
 at 
com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$executeQueryInternal$1(SelectOperation.java:220)
 at 
com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoaderWithException(ExecutionContext.java:197)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:219)
 ... 48 more

I run the sql by sql-gateway.
 When I run it in a batch environment, the sql runs well and produces the 
result "2015-02-15T10:00:00|4\n2015-02-15T11:00:00|1". But after switching to the 
streaming environment, the errors above appear.

*The sql is:*

select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)

*The table query is:*

  CREATE TABLE `orders` (
 rowtime TIMESTAMP,
 id INT,
 product VARCHAR,
 units INT
 ) WITH (
 'format.field-delimiter'='|',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
 'format.type'='csv'
 )

or the query is:

 

 

  was:
*The whole error is:*
 Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of 
the 'rowtime' field of the TableSink consumed type. at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
 at 

[jira] [Created] (FLINK-18365) The same sql in a batch env and a streaming env has different value.

2020-06-18 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-18365:
--

 Summary: The same sql in a batch env and a streaming env has 
different value.
 Key: FLINK-18365
 URL: https://issues.apache.org/jira/browse/FLINK-18365
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.0
 Environment: I use the sql-gateway to run this sql.

*The input table is:*

 CREATE TABLE `scott_dept` (
deptno INT,
dname VARCHAR,
loc VARCHAR
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_scalar/sources/scott_dept.csv',
'format.type'='csv'
)

*The input data is:*

10|ACCOUNTING|NEW YORK
20|RESEARCH|DALLAS
30|SALES|CHICAGO
40|OPERATIONS|BOSTON

Reporter: xiaojin.wy
 Fix For: 1.11.0


*The sql is :*
select deptno, (select count(*) from scott_emp where 1 = 0) as x from scott_dept

*The error:*
In a batch environment, the result value is:10|0\n20|0\n30|0\n40|0
In a streaming environment, the result value 
is:10|None\n20|None\n30|None\n40|None
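
For comparison, the standard-SQL expectation that the batch result matches, shown as a self-contained sketch: COUNT over zero rows yields 0, not NULL, so the scalar subquery should evaluate to 0 for every dept row.

{code:sql}
-- A global COUNT over an empty input returns one row with 0; the scalar
-- subquery should therefore never be None/NULL.
select count(*) from (VALUES (1)) AS scott_emp(empno) where 1 = 0
{code}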



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of t

2020-06-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-18364:
---
Description: 
*The whole error is:*
 Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of 
the 'rowtime' field of the TableSink consumed type. at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
 at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
 at 
org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
 at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
 at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
 at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
 at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
 at scala.Option.map(Option.scala:146) at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
 at 
org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
 at 
com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
 at 
com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$executeQueryInternal$1(SelectOperation.java:220)
 at 
com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoaderWithException(ExecutionContext.java:197)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:219)
 ... 48 more

I run the sql by sql-gateway.
 When I run it in a batch environment, the sql runs well and produces the 
result "2015-02-15T10:00:00|4\n2015-02-15T11:00:00|1". But after switching to the 
streaming environment, the errors above appear.

*The sql is:*

select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)

*The table query is:*

  CREATE TABLE `orders` (
 rowtime TIMESTAMP,
 id INT,
 product VARCHAR,
 units INT
 ) WITH (
 'format.field-delimiter'='|',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
 'format.type'='csv'
 )

 

 

  was:
*The whole error is:*
Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of 
table field 'rowtime' does not match with the physical type TIMESTAMP(3) of the 
'rowtime' field of the TableSink consumed type. at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
 at 

[jira] [Updated] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of t

2020-06-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-18364:
---
Description: 
*The whole error is:*
Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of 
table field 'rowtime' does not match with the physical type TIMESTAMP(3) of the 
'rowtime' field of the TableSink consumed type. at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
 at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
 at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
 at 
org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
 at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
 at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
 at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
 at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
 at scala.Option.map(Option.scala:146) at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
 at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
 at 
org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
 at 
com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
 at 
com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.lambda$executeQueryInternal$1(SelectOperation.java:220)
 at 
com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoaderWithException(ExecutionContext.java:197)
 at 
com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:219)
 ... 48 more



I run the sql by sql-gateway.
When I run it in a batch environment, the sql runs well and produces the 
result "2015-02-15T10:00:00|4\n2015-02-15T11:00:00|1". But after switching to the 
streaming environment, the errors above appear.

*The sql is:*

select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)

*The table query is:*

  CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'connector.path'='/daily_regression_stream_blink_sql_1.10/test_agg/sources/orders.csv',
'format.type'='csv'
)

 

 
Environment: 
The input data is:

2015-02-15 10:15:00.0|1|paint|10
2015-02-15 10:24:15.0|2|paper|5
2015-02-15 10:24:45.0|3|brush|12
2015-02-15 10:58:00.0|4|paint|3
2015-02-15 11:10:00.0|5|paint|3

> A streaming sql cause "org.apache.flink.table.api.ValidationException: Type 
> TIMESTAMP(6) of table field 'rowtime' does not match with the physical type 
> TIMESTAMP(3) of the 'rowtime' field of the TableSink consumed type"
> 

[jira] [Created] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of t

2020-06-18 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-18364:
--

 Summary: A streaming sql cause 
"org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table 
field 'rowtime' does not match with the physical type TIMESTAMP(3) of the 
'rowtime' field of the TableSink consumed type"
 Key: FLINK-18364
 URL: https://issues.apache.org/jira/browse/FLINK-18364
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.11.0
Reporter: xiaojin.wy
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15613) execute sql appear "java.lang.IndexOutOfBoundsException"

2020-02-04 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15613:
---
Description: 
*The sql is* :

CREATE TABLE `int8_tbl` (
 q1 bigint, q2 bigint
 ) WITH (
 'connector.path'='/test_join/sources/int8_tbl.csv',
 'format.empty-column-as-null'='true',
 'format.field-delimiter'='|',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 'format.type'='csv'
 );

select * from int8_tbl i1 left join (select * from int8_tbl i2 join (select 123 
as x) ss on i2.q1 = x) as i3 on i1.q2 = i3.q2 order by 1, 2;
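
A semantically equivalent rewrite may sidestep the converter failure (my assumption, untested): inlining the single-row derived table `(select 123 as x) ss` as a filter plus a constant column removes the lookup that throws.

{code:sql}
-- Equivalent to joining against (select 123 as x): keep rows with q1 = 123
-- and materialize the constant as column x.
select * from int8_tbl i1 left join 
(select i2.q1, i2.q2, 123 as x from int8_tbl i2 where i2.q1 = 123) as i3 
on i1.q2 = i3.q2 order by 1, 2;
{code}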

 

*The output after executing the sql is:*
 [ERROR] Could not execute SQL statement. Reason:

 Caused by: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1 at 
java.util.ArrayList.rangeCheck(ArrayList.java:653) at 
java.util.ArrayList.get(ArrayList.java:429) at 
org.apache.calcite.sql2rel.SqlToRelConverter$LookupContext.findRel(SqlToRelConverter.java:5300)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.lookup(SqlToRelConverter.java:4424)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.lookupExp(SqlToRelConverter.java:4369)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:3720)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.access$2200(SqlToRelConverter.java:217)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.visit(SqlToRelConverter.java:4796)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.visit(SqlToRelConverter.java:4092)
 at org.apache.calcite.sql.SqlIdentifier.accept(SqlIdentifier.java:317) at 
org.apache.calcite.sql2rel.SqlToRelConverter$Blackboard.convertExpression(SqlToRelConverter.java:4656)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectList(SqlToRelConverter.java:3939)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:670)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:627)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3181)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2124)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2005)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2085)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:646)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:627)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3181)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:563)
 at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:148)
 at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:135)
 at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:522)
 at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:436)
 at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:154)
 at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:66) 
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:464)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$createTable$16(LocalExecutor.java:783)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:231)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.createTable(LocalExecutor.java:783)
 ... 9 more
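
A rewrite that may sidestep the failing lookup (a sketch, assuming the converter trips over the unqualified column x in the join condition of the derived table): qualify the reference and project explicit columns instead of *:

{code:java}
select * from int8_tbl i1
left join (
  select i2.q1, i2.q2, ss.x
  from int8_tbl i2
  join (select 123 as x) ss on i2.q1 = ss.x
) as i3 on i1.q2 = i3.q2
order by 1, 2;
{code}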

  was:
*The sql is:*

CREATE TABLE `int8_tbl` (
q1 bigint, q2 bigint
) WITH (
'connector.path'='/test_join/sources/int8_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select * from int8_tbl i1 left join (select * from int8_tbl i2 join (select 123 
as x) ss on i2.q1 = x) as i3 on i1.q2 = i3.q2 order by 1, 2;

 

*The output after executing the sql is:*
[ERROR] Could not execute SQL statement. Reason:
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1





 


> execute sql appear "java.lang.IndexOutOfBoundsException"
> 
>
> Key: FLINK-15613
> URL: https://issues.apache.org/jira/browse/FLINK-15613
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: *The input data is:*
> 0.0
> 1004.30
> -34.84
> 1.2345678901234E200
> 1.2345678901234E-200
> *The sql-client conf is:*

[jira] [Commented] (FLINK-15613) execute sql appear "java.lang.IndexOutOfBoundsException"

2020-02-04 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029650#comment-17029650
 ] 

xiaojin.wy commented on FLINK-15613:


[~ykt836], could you assign someone to fix it?

> execute sql appear "java.lang.IndexOutOfBoundsException"
> 
>
> Key: FLINK-15613
> URL: https://issues.apache.org/jira/browse/FLINK-15613
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: *The input data is:*
> 0.0
> 1004.30
> -34.84
> 1.2345678901234E200
> 1.2345678901234E-200
> *The sql-client conf is:*
> execution:
>  planner: blink
>  type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *The sql is:*
> CREATE TABLE `int8_tbl` (
>   q1 bigint, q2 bigint
> ) WITH (
>   'connector.path'='/test_join/sources/int8_tbl.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> select * from int8_tbl i1 left join (select * from int8_tbl i2 join (select 
> 123 as x) ss on i2.q1 = x) as i3 on i1.q2 = i3.q2 order by 1, 2;
>  
> *The output after executing the sql is:*
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
>  





[jira] [Updated] (FLINK-15658) The same sql run in a streaming environment producing a Exception, but a batch env can run normally.

2020-01-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15658:
---
Description: 
*summary:*
The same sql can run normally in a batch environment, but in a streaming environment there will be an exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;
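
A sketch of a workaround, assuming the streaming planner rejects the duplicate output name f1 contributed by both derived tables: alias the two aggregate columns so every field name in the result is unique:

{code:java}
select a.f1 as a_f1, b.f1 as b_f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;
{code}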







  was:
*summary:*
The same sql can run normally in a batch environment, but in a streaming environment there will be an exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;








> The same sql run in a streaming environment producing a Exception, but a 
> batch env can run normally.
> 
>
> Key: FLINK-15658
> URL: https://issues.apache.org/jira/browse/FLINK-15658
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: *Input data:*
> tenk1 is:
> 4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
> 4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
> 6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
> 6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
> 429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
> 5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
> 1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
> 2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
> 0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
> 2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx
> int4_tbl is:
> 0
> 123456
> -123456
> 2147483647
> -2147483647
> *The sql-client configuration is:*
> execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *summary:*
> The same sql can run normally in a batch environment, but in a streaming environment there will be an exception like this:
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Field 

[jira] [Updated] (FLINK-15658) The same sql run in a streaming environment producing a Exception, but a batch env can run normally.

2020-01-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15658:
---
Environment: 
*Input data:*
tenk1 is:
4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx

int4_tbl is:
0
123456
-123456
2147483647
-2147483647

*The sql-client configuration is :*

execution:
  planner: blink
  type: batch

  was:
*Input data:*
tenk1 is:
4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx

int4_tbl is:
0
123456
-123456
2147483647
-2147483647

*The sql-client configuration is:*

execution:
  planner: blink
  type: batch


> The same sql run in a streaming environment producing a Exception, but a 
> batch env can run normally.
> 
>
> Key: FLINK-15658
> URL: https://issues.apache.org/jira/browse/FLINK-15658
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: *Input data:*
> tenk1 is:
> 4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
> 4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
> 6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
> 6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
> 429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
> 5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
> 1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
> 2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
> 0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
> 2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx
> int4_tbl is:
> 0
> 123456
> -123456
> 2147483647
> -2147483647
> *The sql-client configuration is :*
> execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *summary:*
> The same sql can run normally in a batch environment, but in a streaming environment there will be an exception like this:
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Field names must be unique. 
> Found duplicates: [f1]
> *The sql is:*
> CREATE TABLE `tenk1` (
>   unique1 int,
>   unique2 int,
>   two int,
>   four int,
>   ten int,
>   twenty int,
>   hundred int,
>   thousand int,
>   twothousand int,
>   fivethous int,
>   tenthous int,
>   odd int,
>   even int,
>   stringu1 varchar,
>   stringu2 varchar,
>   string4 varchar
> ) WITH (
>   
> 'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> CREATE TABLE `int4_tbl` (
>   f1 INT
> ) WITH (
>   
> 'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> select a.f1, b.f1, t.thousand, t.tenthous from
>   tenk1 t,
>   (select sum(f1)+1 as f1 from int4_tbl i4a) a,
>   (select sum(f1) as f1 from int4_tbl i4b) b
> where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;





[jira] [Updated] (FLINK-15658) The same sql run in a streaming environment producing a Exception, but a batch env can run normally.

2020-01-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15658:
---
Description: 
*summary:*
The same sql can run normally in a batch environment, but in a streaming environment there will be an exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;







  was:
The same sql can run normally in a batch environment, but in a streaming environment there will be an exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;








> The same sql run in a streaming environment producing a Exception, but a 
> batch env can run normally.
> 
>
> Key: FLINK-15658
> URL: https://issues.apache.org/jira/browse/FLINK-15658
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: *Input data:*
> tenk1 is:
> 4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
> 4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
> 6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
> 6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
> 429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
> 5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
> 1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
> 2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
> 0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
> 2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx
> int4_tbl is:
> 0
> 123456
> -123456
> 2147483647
> -2147483647
> *The sql-client configuration is:*
> execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *summary:*
> The same sql can run normally in a batch environment, but in a streaming environment there will be an exception like this:
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.ValidationException: Field names must 

[jira] [Created] (FLINK-15658) The same sql run in a streaming environment producing a Exception, but a batch env can run normally.

2020-01-18 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15658:
--

 Summary: The same sql run in a streaming environment producing a 
Exception, but a batch env can run normally.
 Key: FLINK-15658
 URL: https://issues.apache.org/jira/browse/FLINK-15658
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
 Environment: *Input data:*
tenk1 is:
4773|9990|1|1|3|13|73|773|773|4773|4773|146|147|PB|GUOAAA|xx
4093|9991|1|1|3|13|93|93|93|4093|4093|186|187|LB|HUOAAA|xx
6587|9992|1|3|7|7|87|587|587|1587|6587|174|175|JT|IUOAAA|xx
6093|9993|1|1|3|13|93|93|93|1093|6093|186|187|JA|JUOAAA|xx
429|9994|1|1|9|9|29|429|429|429|429|58|59|NQ|KUOAAA|xx
5780|9995|0|0|0|0|80|780|1780|780|5780|160|161|IO|LUOAAA|xx
1783|9996|1|3|3|3|83|783|1783|1783|1783|166|167|PQ|MUOAAA|xx
2992|9997|0|0|2|12|92|992|992|2992|2992|184|185|CL|NUOAAA|xx
0|9998|0|0|0|0|0|0|0|0|0|0|1|AA|OUOAAA|xx
2968||0|0|8|8|68|968|968|2968|2968|136|137|EK|PUOAAA|xx

int4_tbl is:
0
123456
-123456
2147483647
-2147483647

*The sql-client configuration is:*

execution:
  planner: blink
  type: batch
Reporter: xiaojin.wy
 Fix For: 1.10.0


The same sql can run normally in a batch environment, but in a streaming environment there will be an exception like this:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Field names must be unique. 
Found duplicates: [f1]





*The sql is:*

CREATE TABLE `tenk1` (
unique1 int,
unique2 int,
two int,
four int,
ten int,
twenty int,
hundred int,
thousand int,
twothousand int,
fivethous int,
tenthous int,
odd int,
even int,
stringu1 varchar,
stringu2 varchar,
string4 varchar
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/tenk1.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

CREATE TABLE `int4_tbl` (
f1 INT
) WITH (

'connector.path'='/daily_regression_test_stream_postgres_1.10/test_join/sources/int4_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select a.f1, b.f1, t.thousand, t.tenthous from
  tenk1 t,
  (select sum(f1)+1 as f1 from int4_tbl i4a) a,
  (select sum(f1) as f1 from int4_tbl i4b) b
where b.f1 = t.thousand and a.f1 = b.f1 and (a.f1+b.f1+999) = t.tenthous;











[jira] [Created] (FLINK-15613) execute sql appear "java.lang.IndexOutOfBoundsException"

2020-01-16 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15613:
--

 Summary: execute sql appear "java.lang.IndexOutOfBoundsException"
 Key: FLINK-15613
 URL: https://issues.apache.org/jira/browse/FLINK-15613
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
 Environment: *The input data is:*

0.0
1004.30
-34.84
1.2345678901234E200
1.2345678901234E-200

*The sql-client conf is:*

execution:
 planner: blink
 type: batch
Reporter: xiaojin.wy
 Fix For: 1.10.0


*The sql is:*

CREATE TABLE `int8_tbl` (
q1 bigint, q2 bigint
) WITH (
'connector.path'='/test_join/sources/int8_tbl.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);

select * from int8_tbl i1 left join (select * from int8_tbl i2 join (select 123 
as x) ss on i2.q1 = x) as i3 on i1.q2 = i3.q2 order by 1, 2;

 

*The output after executing the sql is:*
[ERROR] Could not execute SQL statement. Reason:
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1





 





[jira] [Updated] (FLINK-15565) sql planner Incompatible

2020-01-12 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15565:
---
Description: 
*The sql is:*
 CREATE TABLE `over10k` (
 t tinyint,
 si smallint,
 i int,
 b bigint,
 f float,
 d double,
 bo boolean,
 s varchar,
 ts timestamp,
 deci decimal(4,2),
 bin varchar
 ) WITH (
 
'connector.path'='/daily_regression_batch_hive_1.10/test_window_with_specific_behavior/sources/over10k.csv',
 'format.empty-column-as-null'='true',
 'format.field-delimiter'='|',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 'format.type'='csv'
 );
 select s, rank() over (partition by s order by si), sum(b) over (partition by 
s order by si) from over10k limit 100;

*The data is :*
 109|277|65620|4294967305|97.25|7.80|true|nick quirinius|2013-03-01 
09:11:58.703226|27.72|undecided
 93|263|65725|4294967341|6.06|4.12|false|calvin king|2013-03-01 
09:11:58.703299|32.44|values clariffication
 108|383|65629|4294967510|39.55|47.67|false|jessica zipper|2013-03-01 
09:11:58.703133|74.23|nap time
 89|463|65537|4294967493|64.82|13.79|true|ethan white|2013-03-01 
09:11:58.703243|89.52|nap time
 88|372|65645|4294967358|34.48|11.18|true|quinn thompson|2013-03-01 
09:11:58.703168|84.86|forestry
 123|432|65626|4294967435|2.39|16.49|true|david white|2013-03-01 
09:11:58.703136|61.24|joggying
 57|486|65551|4294967397|36.11|9.88|true|katie xylophone|2013-03-01 
09:11:58.703142|57.10|zync studies
 59|343|65787|4294967312|66.89|6.54|true|mike laertes|2013-03-01 
09:11:58.703209|27.56|xylophone band
 74|267|65671|4294967409|21.14|14.64|true|priscilla miller|2013-03-01 
09:11:58.703197|89.06|undecided
 25|336|65587|4294967336|71.01|14.90|true|tom ichabod|2013-03-01 
09:11:58.703127|74.32|zync studies
 48|346|65712|4294967315|45.01|16.08|true|zach brown|2013-03-01 
09:11:58.703108|21.68|zync studies
 84|385|65776|4294967452|35.80|32.13|false|xavier zipper|2013-03-01 
09:11:58.703311|99.46|education
 58|389|65766|4294967416|95.55|20.62|false|sarah miller|2013-03-01 
09:11:58.703215|70.92|history
 22|403|65565|4294967381|99.65|35.42|false|yuri johnson|2013-03-01 
09:11:58.703154|94.47|geology
 55|428|65733|4294967535|99.54|5.35|false|jessica king|2013-03-01 
09:11:58.703233|30.30|forestry
 117|410|65706|4294967391|50.15|0.21|false|quinn johnson|2013-03-01 
09:11:58.703248|65.99|yard duty
 95|423|65573|4294967378|47.59|17.37|true|alice robinson|2013-03-01 
09:11:58.703133|54.57|linguistics
 87|332|65748|4294967320|19.83|41.67|false|fred ellison|2013-03-01 
09:11:58.703289|79.02|mathematics
 114|263|65674|4294967405|84.44|33.18|true|victor van buren|2013-03-01 
09:11:58.703092|63.74|linguistics
 5|369|65780|4294967488|92.02|38.59|true|zach polk|2013-03-01 
09:11:58.703271|67.29|yard duty
 -3|430|65667|4294967469|65.50|40.46|true|yuri xylophone|2013-03-01 
09:11:58.703258|30.94|american history
 120|264|65769|4294967486|89.97|41.18|false|xavier hernandez|2013-03-01 
09:11:58.703140|66.89|philosophy
 107|317|65634|4294967488|5.68|18.89|false|priscilla ichabod|2013-03-01 
09:11:58.703196|39.42|joggying
 29|386|65723|4294967328|71.48|6.13|false|ulysses ichabod|2013-03-01 
09:11:58.703215|86.65|xylophone band
 22|434|65768|4294967543|44.25|27.56|false|tom polk|2013-03-01 
09:11:58.703306|12.30|kindergarten
 -1|274|65755|4294967300|22.01|35.52|false|oscar king|2013-03-01 
09:11:58.703141|33.35|chemistry
 6|365|65603|4294967522|18.51|5.60|false|gabriella king|2013-03-01 
09:11:58.703104|34.20|geology
 97|414|65757|4294967325|31.82|22.37|false|rachel nixon|2013-03-01 
09:11:58.703127|61.00|nap time
 72|448|65538|4294967524|80.09|7.73|true|luke brown|2013-03-01 
09:11:58.703090|95.81|american history
 51|280|65589|4294967486|57.46|23.35|false|zach xylophone|2013-03-01 
09:11:58.703299|11.54|education
 12|447|65583|4294967389|0.98|29.79|true|yuri polk|2013-03-01 
09:11:58.703305|1.89|wind surfing
 -1|360|65539|4294967464|4.08|39.51|false|oscar davidson|2013-03-01 
09:11:58.703144|59.47|nap time
 0|380|65569|4294967425|0.94|28.93|false|sarah robinson|2013-03-01 
09:11:58.703176|88.81|xylophone band
 66|478|65669|4294967339|23.66|38.34|true|yuri carson|2013-03-01 
09:11:58.703228|64.68|opthamology
 12|322|65771|4294967545|84.87|10.76|false|sarah allen|2013-03-01 
09:11:58.703271|0.79|joggying
 79|308|65563|4294967347|4.06|44.84|false|nick underhill|2013-03-01 
09:11:58.703097|76.53|industrial engineering
 4|382|65719|4294967329|7.26|39.92|true|fred polk|2013-03-01 
09:11:58.703073|73.64|mathematics
 10|448|65675|4294967392|26.20|16.30|true|rachel laertes|2013-03-01 
09:11:58.703200|18.01|xylophone band
 45|281|65685|4294967513|81.33|32.22|true|oscar allen|2013-03-01 
09:11:58.703285|71.38|religion
 57|288|65599|4294967422|90.33|44.25|false|bob young|2013-03-01 
09:11:58.703185|11.16|biology
 77|452|65706|4294967512|22.90|5.35|true|bob van buren|2013-03-01 
09:11:58.703290|14.58|debate
 

[jira] [Created] (FLINK-15565) sql planner Incompatible

2020-01-12 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15565:
--

 Summary: sql planner Incompatible
 Key: FLINK-15565
 URL: https://issues.apache.org/jira/browse/FLINK-15565
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.10.0
Reporter: xiaojin.wy
 Fix For: 1.10.0


*The sql is:*
CREATE TABLE `over10k` (
t tinyint,
si smallint,
i int,
b bigint,
f float,
d double,
bo boolean,
s varchar,
ts timestamp,
deci decimal(4,2),
bin varchar
) WITH (

'connector.path'='/daily_regression_batch_hive_1.10/test_window_with_specific_behavior/sources/over10k.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
select s, rank() over (partition by s order by si), sum(b) over (partition by s 
order by si) from over10k limit 100;


*The data is :*
109|277|65620|4294967305|97.25|7.80|true|nick quirinius|2013-03-01 
09:11:58.703226|27.72|undecided
93|263|65725|4294967341|6.06|4.12|false|calvin king|2013-03-01 
09:11:58.703299|32.44|values clariffication
108|383|65629|4294967510|39.55|47.67|false|jessica zipper|2013-03-01 
09:11:58.703133|74.23|nap time
89|463|65537|4294967493|64.82|13.79|true|ethan white|2013-03-01 
09:11:58.703243|89.52|nap time
88|372|65645|4294967358|34.48|11.18|true|quinn thompson|2013-03-01 
09:11:58.703168|84.86|forestry
123|432|65626|4294967435|2.39|16.49|true|david white|2013-03-01 
09:11:58.703136|61.24|joggying
57|486|65551|4294967397|36.11|9.88|true|katie xylophone|2013-03-01 
09:11:58.703142|57.10|zync studies
59|343|65787|4294967312|66.89|6.54|true|mike laertes|2013-03-01 
09:11:58.703209|27.56|xylophone band
74|267|65671|4294967409|21.14|14.64|true|priscilla miller|2013-03-01 
09:11:58.703197|89.06|undecided
25|336|65587|4294967336|71.01|14.90|true|tom ichabod|2013-03-01 
09:11:58.703127|74.32|zync studies
48|346|65712|4294967315|45.01|16.08|true|zach brown|2013-03-01 
09:11:58.703108|21.68|zync studies
84|385|65776|4294967452|35.80|32.13|false|xavier zipper|2013-03-01 
09:11:58.703311|99.46|education
58|389|65766|4294967416|95.55|20.62|false|sarah miller|2013-03-01 
09:11:58.703215|70.92|history
22|403|65565|4294967381|99.65|35.42|false|yuri johnson|2013-03-01 
09:11:58.703154|94.47|geology
55|428|65733|4294967535|99.54|5.35|false|jessica king|2013-03-01 
09:11:58.703233|30.30|forestry
117|410|65706|4294967391|50.15|0.21|false|quinn johnson|2013-03-01 
09:11:58.703248|65.99|yard duty
95|423|65573|4294967378|47.59|17.37|true|alice robinson|2013-03-01 
09:11:58.703133|54.57|linguistics
87|332|65748|4294967320|19.83|41.67|false|fred ellison|2013-03-01 
09:11:58.703289|79.02|mathematics
114|263|65674|4294967405|84.44|33.18|true|victor van buren|2013-03-01 
09:11:58.703092|63.74|linguistics
5|369|65780|4294967488|92.02|38.59|true|zach polk|2013-03-01 
09:11:58.703271|67.29|yard duty
-3|430|65667|4294967469|65.50|40.46|true|yuri xylophone|2013-03-01 
09:11:58.703258|30.94|american history
120|264|65769|4294967486|89.97|41.18|false|xavier hernandez|2013-03-01 
09:11:58.703140|66.89|philosophy
107|317|65634|4294967488|5.68|18.89|false|priscilla ichabod|2013-03-01 
09:11:58.703196|39.42|joggying
29|386|65723|4294967328|71.48|6.13|false|ulysses ichabod|2013-03-01 
09:11:58.703215|86.65|xylophone band
22|434|65768|4294967543|44.25|27.56|false|tom polk|2013-03-01 
09:11:58.703306|12.30|kindergarten
-1|274|65755|4294967300|22.01|35.52|false|oscar king|2013-03-01 
09:11:58.703141|33.35|chemistry
6|365|65603|4294967522|18.51|5.60|false|gabriella king|2013-03-01 
09:11:58.703104|34.20|geology
97|414|65757|4294967325|31.82|22.37|false|rachel nixon|2013-03-01 
09:11:58.703127|61.00|nap time
72|448|65538|4294967524|80.09|7.73|true|luke brown|2013-03-01 
09:11:58.703090|95.81|american history
51|280|65589|4294967486|57.46|23.35|false|zach xylophone|2013-03-01 
09:11:58.703299|11.54|education
12|447|65583|4294967389|0.98|29.79|true|yuri polk|2013-03-01 
09:11:58.703305|1.89|wind surfing
-1|360|65539|4294967464|4.08|39.51|false|oscar davidson|2013-03-01 
09:11:58.703144|59.47|nap time
0|380|65569|4294967425|0.94|28.93|false|sarah robinson|2013-03-01 
09:11:58.703176|88.81|xylophone band
66|478|65669|4294967339|23.66|38.34|true|yuri carson|2013-03-01 
09:11:58.703228|64.68|opthamology
12|322|65771|4294967545|84.87|10.76|false|sarah allen|2013-03-01 
09:11:58.703271|0.79|joggying
79|308|65563|4294967347|4.06|44.84|false|nick underhill|2013-03-01 
09:11:58.703097|76.53|industrial engineering
4|382|65719|4294967329|7.26|39.92|true|fred polk|2013-03-01 
09:11:58.703073|73.64|mathematics
10|448|65675|4294967392|26.20|16.30|true|rachel laertes|2013-03-01 
09:11:58.703200|18.01|xylophone band
45|281|65685|4294967513|81.33|32.22|true|oscar 

[jira] [Commented] (FLINK-15437) Start session with property of "-Dtaskmanager.memory.process.size" not work

2019-12-30 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005267#comment-17005267
 ] 

xiaojin.wy commented on FLINK-15437:


[~xintongsong] I deleted '-n 20', and the same exception still appears.

> Start session with property of "-Dtaskmanager.memory.process.size" not work
> ---
>
> Key: FLINK-15437
> URL: https://issues.apache.org/jira/browse/FLINK-15437
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *The environment:*
> The yarn session command is as below, and the flink-conf.yaml does not contain the property "taskmanager.memory.process.size":
> export HADOOP_CLASSPATH=`hadoop classpath`;export 
> HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
>  export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
> $BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
> daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
> -Dtaskmanager.memory.process.size=1024m
> *After executing the command above, there is an exception like this:*
> 2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy   
>   - Connecting to ResourceManager at 
> z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
> 2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - Error while running the Flink session.
> org.apache.flink.configuration.IllegalConfigurationException: Either Task 
> Heap Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
> (taskmanager.memory.managed.size), or Total Flink Memory size 
> (taskmanager.memory.flink.size), or Total Process Memory size 
> (taskmanager.memory.process.size) need to be configured explicitly.
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
>   at 
> org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)
> 
>  The program finished with the following exception:
> org.apache.flink.configuration.IllegalConfigurationException: Either Task 
> Heap Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
> (taskmanager.memory.managed.size), or Total Flink Memory size 
> (taskmanager.memory.flink.size), or Total Process Memory size 
> (taskmanager.memory.process.size) need to be configured explicitly.
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
>   at 
> org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)
> *The flink-conf.yaml is :*
> jobmanager.rpc.address: localhost
> jobmanager.rpc.port: 6123
> jobmanager.heap.size: 1024m
> taskmanager.memory.total-process.size: 1024m
> taskmanager.numberOfTaskSlots: 1
> parallelism.default: 1
> jobmanager.execution.failover-strategy: region





[jira] [Commented] (FLINK-15437) Start session with property of "-Dtaskmanager.memory.process.size" not work

2019-12-30 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005234#comment-17005234
 ] 

xiaojin.wy commented on FLINK-15437:


[~wuzang]

> Start session with property of "-Dtaskmanager.memory.process.size" not work
> ---
>
> Key: FLINK-15437
> URL: https://issues.apache.org/jira/browse/FLINK-15437
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *The environment:*
> The yarn session command is as below, and the flink-conf.yaml does not contain the property "taskmanager.memory.process.size":
> export HADOOP_CLASSPATH=`hadoop classpath`;export 
> HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
>  export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
> $BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
> daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
> -Dtaskmanager.memory.process.size=1024m
> *After executing the command above, there is an exception like this:*
> 2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy   
>   - Connecting to ResourceManager at 
> z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
> 2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - Error while running the Flink session.
> org.apache.flink.configuration.IllegalConfigurationException: Either Task 
> Heap Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
> (taskmanager.memory.managed.size), or Total Flink Memory size 
> (taskmanager.memory.flink.size), or Total Process Memory size 
> (taskmanager.memory.process.size) need to be configured explicitly.
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
>   at 
> org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)
> 
>  The program finished with the following exception:
> org.apache.flink.configuration.IllegalConfigurationException: Either Task 
> Heap Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
> (taskmanager.memory.managed.size), or Total Flink Memory size 
> (taskmanager.memory.flink.size), or Total Process Memory size 
> (taskmanager.memory.process.size) need to be configured explicitly.
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
>   at 
> org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)
> *The flink-conf.yaml is :*
> jobmanager.rpc.address: localhost
> jobmanager.rpc.port: 6123
> jobmanager.heap.size: 1024m
> taskmanager.memory.total-process.size: 1024m
> taskmanager.numberOfTaskSlots: 1
> parallelism.default: 1
> jobmanager.execution.failover-strategy: region





[jira] [Updated] (FLINK-15437) Start session with property of "-Dtaskmanager.memory.process.size" not work

2019-12-30 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15437:
---
Description: 
*The environment:*
The yarn session command is as below, and the flink-conf.yaml does not contain the property "taskmanager.memory.process.size":

export HADOOP_CLASSPATH=`hadoop classpath`;export 
HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
 export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
$BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
-Dtaskmanager.memory.process.size=1024m


*After executing the command above, there is an exception like this:*
2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy 
- Connecting to ResourceManager at 
z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli 
- Error while running the Flink session.
org.apache.flink.configuration.IllegalConfigurationException: Either Task Heap 
Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
(taskmanager.memory.managed.size), or Total Flink Memory size 
(taskmanager.memory.flink.size), or Total Process Memory size 
(taskmanager.memory.process.size) need to be configured explicitly.
at 
org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
at 
org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)


 The program finished with the following exception:

org.apache.flink.configuration.IllegalConfigurationException: Either Task Heap 
Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
(taskmanager.memory.managed.size), or Total Flink Memory size 
(taskmanager.memory.flink.size), or Total Process Memory size 
(taskmanager.memory.process.size) need to be configured explicitly.
at 
org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
at 
org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)

*The flink-conf.yaml is :*
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.memory.total-process.size: 1024m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
jobmanager.execution.failover-strategy: region
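
Note that the file above sets taskmanager.memory.total-process.size while the exception asks for taskmanager.memory.process.size; a minimal sketch of the entry the validator names (assuming this build reads the renamed key):

{code:java}
# flink-conf.yaml (sketch): the key the error message asks for
taskmanager.memory.process.size: 1024m
{code}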





  was:
*The environment:*
The yarn session command is as below, and the flink-conf.yaml does not contain the property "taskmanager.memory.process.size":

export HADOOP_CLASSPATH=`hadoop classpath`;export 
HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
 export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
$BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
-Dtaskmanager.memory.process.size=1024m


After executing the command above, there is an exception like this:
2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy 
- Connecting to ResourceManager at 
z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli 
- Error while running the Flink session.
org.apache.flink.configuration.IllegalConfigurationException: Either Task Heap 
Memory size 

[jira] [Created] (FLINK-15437) Start session with property of "-Dtaskmanager.memory.process.size" not work

2019-12-30 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15437:
--

 Summary: Start session with property of 
"-Dtaskmanager.memory.process.size" not work
 Key: FLINK-15437
 URL: https://issues.apache.org/jira/browse/FLINK-15437
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.10.0
Reporter: xiaojin.wy
 Fix For: 1.10.0


The yarn session command is as below, and the flink-conf.yaml does not contain the property "taskmanager.memory.process.size":

export HADOOP_CLASSPATH=`hadoop classpath`;export 
HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
 export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
$BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
-Dtaskmanager.memory.process.size=1024m


After executing the command above, there is an exception like this:
2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy 
- Connecting to ResourceManager at 
z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli 
- Error while running the Flink session.
org.apache.flink.configuration.IllegalConfigurationException: Either Task Heap 
Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
(taskmanager.memory.managed.size), or Total Flink Memory size 
(taskmanager.memory.flink.size), or Total Process Memory size 
(taskmanager.memory.process.size) need to be configured explicitly.
at 
org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
at 
org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)


 The program finished with the following exception:

org.apache.flink.configuration.IllegalConfigurationException: Either Task Heap 
Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
(taskmanager.memory.managed.size), or Total Flink Memory size 
(taskmanager.memory.flink.size), or Total Process Memory size 
(taskmanager.memory.process.size) need to be configured explicitly.
at 
org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
at 
org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)

The flink-conf.yaml is :
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.memory.total-process.size: 1024m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
jobmanager.execution.failover-strategy: region









[jira] [Updated] (FLINK-15437) Start session with property of "-Dtaskmanager.memory.process.size" not work

2019-12-30 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15437:
---
Description: 
*The environment:*
The yarn session command is as below, and the flink-conf.yaml does not contain the property "taskmanager.memory.process.size":

export HADOOP_CLASSPATH=`hadoop classpath`;export 
HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
 export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
$BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
-Dtaskmanager.memory.process.size=1024m


After executing the command above, there is an exception like this:
2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy 
- Connecting to ResourceManager at 
z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli 
- Error while running the Flink session.
org.apache.flink.configuration.IllegalConfigurationException: Either Task Heap 
Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
(taskmanager.memory.managed.size), or Total Flink Memory size 
(taskmanager.memory.flink.size), or Total Process Memory size 
(taskmanager.memory.process.size) need to be configured explicitly.
at 
org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
at 
org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)


 The program finished with the following exception:

org.apache.flink.configuration.IllegalConfigurationException: Either Task Heap 
Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
(taskmanager.memory.managed.size), or Total Flink Memory size 
(taskmanager.memory.flink.size), or Total Process Memory size 
(taskmanager.memory.process.size) need to be configured explicitly.
at 
org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
at 
org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at 
org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)

The flink-conf.yaml is :
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.memory.total-process.size: 1024m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
jobmanager.execution.failover-strategy: region





  was:
The yarn session command is as below, and the flink-conf.yaml does not contain the property "taskmanager.memory.process.size":

export HADOOP_CLASSPATH=`hadoop classpath`;export 
HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
 export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
$BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
-Dtaskmanager.memory.process.size=1024m


After executing the command above, there is an exception like this:
2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy 
- Connecting to ResourceManager at 
z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli 
- Error while running the Flink session.
org.apache.flink.configuration.IllegalConfigurationException: Either Task Heap 
Memory size (taskmanager.memory.task.heap.size) and 

[jira] [Updated] (FLINK-15397) Streaming and batch has different value in the case of count function

2019-12-25 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15397:
---
Description: 
*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configured execution type is batch, the result will be 0, but if it is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*


{code:java}
1|1
1|2
2|1
2|2
3|1
3|2
|1
3|
|
{code}
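
Restating the divergence as a sketch (assuming blink planner semantics, where a global aggregate in streaming mode only emits once it has received input):

{code:java}
SELECT COUNT(1) FROM testdata WHERE false;
-- batch:     returns a single row containing 0
-- streaming: emits nothing, since no input row ever reaches the aggregate
{code}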





  was:
*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configured execution type is batch, the result will be 0, but if it is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*


{code:java}
1|1
1|2
2|1
2|2
3|1
3|2
 |1
3|
 |
{code}






> Streaming and batch has different value in the case of count function
> -
>
> Key: FLINK-15397
> URL: https://issues.apache.org/jira/browse/FLINK-15397
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *The sql is:*
> CREATE TABLE `testdata` (
>   a INT,
>   b INT
> ) WITH (
>   
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> SELECT COUNT(1) FROM testdata WHERE false;
> If the configured execution type is batch, the result will be 0, but if it is streaming, there will be no value.
> *The configuration is:*
> execution:
>   planner: blink
>   type: streaming
> *The input data is:*
> {code:java}
> 1|1
> 1|2
> 2|1
> 2|2
> 3|1
> 3|2
> |1
> 3|
> |
> {code}





[jira] [Updated] (FLINK-15397) Streaming and batch has different value in the case of count function

2019-12-25 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15397:
---
Description: 
*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configuration's type is batch, the result will be 0; but if the
configuration is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*

1|1
1|2
2|1
2|2
3|1
3|2
 |1
3|
 |




  was:
*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configuration's type is batch, the result will be 0; but if the
configuration is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*

1|1
1|2
2|1
2|2
3|1
3|2
|1
3|
|





> Streaming and batch has different value in the case of count function
> -
>
> Key: FLINK-15397
> URL: https://issues.apache.org/jira/browse/FLINK-15397
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *The sql is:*
> CREATE TABLE `testdata` (
>   a INT,
>   b INT
> ) WITH (
>   
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> SELECT COUNT(1) FROM testdata WHERE false;
> If the configuration's type is batch, the result will be 0; but if the
> configuration is streaming, there will be no value.
> *The configuration is:*
> execution:
>   planner: blink
>   type: streaming
> *The input data is:*
> 1|1
> 1|2
> 2|1
> 2|2
> 3|1
> 3|2
>  |1
> 3|
>  |



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15397) Streaming and batch has different value in the case of count function

2019-12-25 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15397:
---
Description: 
*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configuration's type is batch, the result will be 0; but if the
configuration is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*


{code:java}
1|1
1|2
2|1
2|2
3|1
3|2
 |1
3|
 |
{code}





  was:
*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configuration's type is batch, the result will be 0; but if the
configuration is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*

1|1
1|2
2|1
2|2
3|1
3|2
 |1
3|
 |





> Streaming and batch has different value in the case of count function
> -
>
> Key: FLINK-15397
> URL: https://issues.apache.org/jira/browse/FLINK-15397
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *The sql is:*
> CREATE TABLE `testdata` (
>   a INT,
>   b INT
> ) WITH (
>   
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> SELECT COUNT(1) FROM testdata WHERE false;
> If the configuration's type is batch, the result will be 0; but if the
> configuration is streaming, there will be no value.
> *The configuration is:*
> execution:
>   planner: blink
>   type: streaming
> *The input data is:*
> {code:java}
> 1|1
> 1|2
> 2|1
> 2|2
> 3|1
> 3|2
>  |1
> 3|
>  |
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15397) Streaming and batch has different value in the case of count function

2019-12-25 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15397:
---
Description: 
*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configuration's type is batch, the result will be 0; but if the
configuration is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*

1|1
1|2
2|1
2|2
3|1
3|2
|1
3|
|




  was:
*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configuration's type is batch, the result will be 0; but if the
configuration is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*
1|1
1|2
2|1
2|2
3|1
3|2
|1
3|
|





> Streaming and batch has different value in the case of count function
> -
>
> Key: FLINK-15397
> URL: https://issues.apache.org/jira/browse/FLINK-15397
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> *The sql is:*
> CREATE TABLE `testdata` (
>   a INT,
>   b INT
> ) WITH (
>   
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> SELECT COUNT(1) FROM testdata WHERE false;
> If the configuration's type is batch, the result will be 0; but if the
> configuration is streaming, there will be no value.
> *The configuration is:*
> execution:
>   planner: blink
>   type: streaming
> *The input data is:*
> 1|1
> 1|2
> 2|1
> 2|2
> 3|1
> 3|2
> |1
> 3|
> |



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15397) Streaming and batch has different value in the case of count function

2019-12-25 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15397:
--

 Summary: Streaming and batch has different value in the case of 
count function
 Key: FLINK-15397
 URL: https://issues.apache.org/jira/browse/FLINK-15397
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy
 Fix For: 1.10.0


*The sql is:*
CREATE TABLE `testdata` (
a INT,
b INT
) WITH (

'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
'format.empty-column-as-null'='true',
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',
'format.type'='csv'
);
SELECT COUNT(1) FROM testdata WHERE false;

If the configuration's type is batch, the result will be 0; but if the
configuration is streaming, there will be no value.


*The configuration is:*
execution:
  planner: blink
  type: streaming


*The input data is:*
1|1
1|2
2|1
2|2
3|1
3|2
|1
3|
|






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15346) OldCsvValidator has no property of "emptyColumnAsNull".

2019-12-20 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15346:
--

 Summary: OldCsvValidator has no property of "emptyColumnAsNull".
 Key: FLINK-15346
 URL: https://issues.apache.org/jira/browse/FLINK-15346
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.10.0
Reporter: xiaojin.wy
 Fix For: 1.10.0


The OldCsvValidator class and the CsvTableSourceFactoryBase class should add a
"format.empty-column-as-null" property, so that users can use it.





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-15321) The result of sql(SELECT concat('a', cast(null as varchar), 'c');) is NULL;

2019-12-18 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999718#comment-16999718
 ] 

xiaojin.wy edited comment on FLINK-15321 at 12/19/19 3:50 AM:
--

Ok, I will close it if it's expected.


was (Author: xiaojin.wy):
ok, I will close it if it's expected.

> The result of sql(SELECT concat('a', cast(null as varchar), 'c');) is NULL;
> ---
>
> Key: FLINK-15321
> URL: https://issues.apache.org/jira/browse/FLINK-15321
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> The sql is:
> SELECT concat('a', cast(null as varchar), 'c');
> The result of the SQL is NULL after you execute it in a sqlClient
> environment; but actually the result should be 'ac'.
> The config is:
> execution:
>   planner: blink
>   type: batch



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15321) The result of sql(SELECT concat('a', cast(null as varchar), 'c');) is NULL;

2019-12-18 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999718#comment-16999718
 ] 

xiaojin.wy commented on FLINK-15321:


ok, I will close it if it's expected.

> The result of sql(SELECT concat('a', cast(null as varchar), 'c');) is NULL;
> ---
>
> Key: FLINK-15321
> URL: https://issues.apache.org/jira/browse/FLINK-15321
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> The sql is:
> SELECT concat('a', cast(null as varchar), 'c');
> The result of the SQL is NULL after you execute it in a sqlClient
> environment; but actually the result should be 'ac'.
> The config is:
> execution:
>   planner: blink
>   type: batch



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15321) The result of sql(SELECT concat('a', cast(null as varchar), 'c');) is NULL;

2019-12-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15321:
---
Description: 
The sql is:
SELECT concat('a', cast(null as varchar), 'c');

The result of the SQL is NULL after you execute it in a sqlClient
environment; but actually the result should be 'ac'.

The config is:
execution:
  planner: blink
  type: batch

  was:
The sql is:
SELECT concat('a', cast(null as varchar), 'c');

The result of the SQL is NULL after you execute it in a sqlClient
environment; but actually the result should be 'ac'.


> The result of sql(SELECT concat('a', cast(null as varchar), 'c');) is NULL;
> ---
>
> Key: FLINK-15321
> URL: https://issues.apache.org/jira/browse/FLINK-15321
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> The sql is:
> SELECT concat('a', cast(null as varchar), 'c');
> The result of the SQL is NULL after you execute it in a sqlClient
> environment; but actually the result should be 'ac'.
> The config is:
> execution:
>   planner: blink
>   type: batch



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15321) The result of sql(SELECT concat('a', cast(null as varchar), 'c');) is NULL;

2019-12-18 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15321:
--

 Summary: The result of sql(SELECT concat('a', cast(null as 
varchar), 'c');) is NULL;
 Key: FLINK-15321
 URL: https://issues.apache.org/jira/browse/FLINK-15321
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy


The sql is:
SELECT concat('a', cast(null as varchar), 'c');

The result of the SQL is NULL after you execute it in a sqlClient
environment; but actually the result should be 'ac'.
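The comments higher up in this digest suggest the NULL result may be the intended semantics (CONCAT commonly yields NULL when any argument is NULL). A sketch of how to obtain 'ac' explicitly, by mapping the NULL argument to an empty string first:

{code:java}
-- COALESCE replaces the NULL argument with '', so the result is 'ac'
SELECT concat('a', COALESCE(cast(null as varchar), ''), 'c');
{code}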



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15310) A timestamp result get by a select sql and a csvsink sql is different

2019-12-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15310:
---
Description: 
*The sql is:*
CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_agg/sources/orders.csv',
'format.type'='csv'
);
select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)


The result obtained in a sqlClient environment using the SQL above is like this:
 !image-2019-12-18-16-33-24-510.png! 

But the same SQL written directly to a csv batch sink will produce a result like this:
1972-03-06 08:44:36.736|4
1972-03-06 08:44:36.736|1
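For reference, FLOOR(ts TO HOUR) should zero out the minutes, seconds, and fraction, so the 08:44:36.736 values in the sink output above look like the unfloored input rather than the floored result. A minimal sketch of the expected behavior (assuming a timestamp literal over a VALUES source):

{code:java}
-- expected: 1972-03-06 08:00:00.000
SELECT FLOOR(TIMESTAMP '1972-03-06 08:44:36.736' TO HOUR) AS floored
FROM (VALUES (0)) AS t(x);
{code}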



  was:
*The sql is:*
CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_agg/sources/orders.csv',
'format.type'='csv'
);
select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)


The result obtained in a sqlClient environment using the SQL above:
 !image-2019-12-18-16-33-24-510.png! 

But the same SQL written directly to a csv batch sink will produce a result like this:
1972-03-06 08:44:36.736|4
1972-03-06 08:44:36.736|1




> A timestamp result get by a select sql and a csvsink sql is different
> -
>
> Key: FLINK-15310
> URL: https://issues.apache.org/jira/browse/FLINK-15310
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Attachments: image-2019-12-18-16-33-24-510.png
>
>
> *The sql is:*
> CREATE TABLE `orders` (
>   rowtime TIMESTAMP,
>   id  INT,
>   product VARCHAR,
>   units INT
> ) WITH (
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   
> 'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_agg/sources/orders.csv',
>   'format.type'='csv'
> );
> select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
> floor(rowtime to hour)
> The result obtained in a sqlClient environment using the SQL above is like
> this:
>  !image-2019-12-18-16-33-24-510.png! 
> But the same SQL written directly to a csv batch sink will produce a result like
> this:
> 1972-03-06 08:44:36.736|4
> 1972-03-06 08:44:36.736|1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15310) A timestamp result get by a select sql and a csvsink sql is different

2019-12-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15310:
---
Description: 
*The sql is:*
CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_agg/sources/orders.csv',
'format.type'='csv'
);
select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)


The result obtained in a sqlClient environment using the SQL above:
 !image-2019-12-18-16-33-24-510.png! 

But the same SQL written directly to a csv batch sink will produce a result like this:
1972-03-06 08:44:36.736|4
1972-03-06 08:44:36.736|1



  was:
*The sql is:*
CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_agg/sources/orders.csv',
'format.type'='csv'
);
select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)


The result obtained in a sqlClient environment using the SQL above:
 !image-2019-12-18-16-33-24-510.png! 

But the same SQL written directly to a csv batch sink will produce a result like this:
1972-03-06 08:44:36.736|4
1972-03-06 08:44:36.736|1




> A timestamp result get by a select sql and a csvsink sql is different
> -
>
> Key: FLINK-15310
> URL: https://issues.apache.org/jira/browse/FLINK-15310
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Attachments: image-2019-12-18-16-33-24-510.png
>
>
> *The sql is:*
> CREATE TABLE `orders` (
>   rowtime TIMESTAMP,
>   id  INT,
>   product VARCHAR,
>   units INT
> ) WITH (
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   
> 'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_agg/sources/orders.csv',
>   'format.type'='csv'
> );
> select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
> floor(rowtime to hour)
> The result obtained in a sqlClient environment using the SQL above:
>  !image-2019-12-18-16-33-24-510.png! 
> But the same SQL written directly to a csv batch sink will produce a result like
> this:
> 1972-03-06 08:44:36.736|4
> 1972-03-06 08:44:36.736|1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15310) A timestamp result get by a select sql and a csvsink sql is different

2019-12-18 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16998938#comment-16998938
 ] 

xiaojin.wy commented on FLINK-15310:


[~danny0405]

> A timestamp result get by a select sql and a csvsink sql is different
> -
>
> Key: FLINK-15310
> URL: https://issues.apache.org/jira/browse/FLINK-15310
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: execution:
>   planner: blink
>   type: batch
>Reporter: xiaojin.wy
>Priority: Major
> Attachments: image-2019-12-18-16-33-24-510.png
>
>
> *The sql is:*
> CREATE TABLE `orders` (
>   rowtime TIMESTAMP,
>   id  INT,
>   product VARCHAR,
>   units INT
> ) WITH (
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   
> 'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_agg/sources/orders.csv',
>   'format.type'='csv'
> );
> select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
> floor(rowtime to hour)
> The result obtained in a sqlClient environment using the SQL above:
>  !image-2019-12-18-16-33-24-510.png! 
> But the same SQL written directly to a csv batch sink will produce a result like
> this:
> 1972-03-06 08:44:36.736|4
> 1972-03-06 08:44:36.736|1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15310) A timestamp result get by a select sql and a csvsink sql is different

2019-12-18 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15310:
--

 Summary: A timestamp result get by a select sql and a csvsink sql 
is different
 Key: FLINK-15310
 URL: https://issues.apache.org/jira/browse/FLINK-15310
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
 Environment: execution:
  planner: blink
  type: batch
Reporter: xiaojin.wy
 Attachments: image-2019-12-18-16-33-24-510.png

*The sql is:*
CREATE TABLE `orders` (
rowtime TIMESTAMP,
id  INT,
product VARCHAR,
units INT
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_agg/sources/orders.csv',
'format.type'='csv'
);
select floor(rowtime to hour) as rowtime, count(*) as c from orders group by 
floor(rowtime to hour)


The result obtained in a sqlClient environment using the SQL above:
 !image-2019-12-18-16-33-24-510.png! 

But the same SQL written directly to a csv batch sink will produce a result like this:
1972-03-06 08:44:36.736|4
1972-03-06 08:44:36.736|1





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15309) Execute sql appear "NumberFormatException: Zero length BigInteger"

2019-12-18 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15309:
---
Description: 
*The sql is:*
CREATE TABLE `src` (
key bigint,
v varchar
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='/defender_test_data/daily_regression_batch_hive_1.10/test_cast/sources/src.csv',
'format.type'='csv'
);

select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1;

*The result schema obtained in the code is:*
sinkSchema:root
 |-- c1: DECIMAL(10, 2)
 |-- c2: CHAR(10)
 |-- c3: VARCHAR(10)

*The detail:*
If you execute the SQL above in a sqlClient environment, you can get
the result like this:
 !image-2019-12-18-15-53-24-501.png! 

But if you write the result directly into a csv sink in the code, there will be an
exception:

Caused by: java.lang.NumberFormatException: Zero length BigInteger
at java.math.BigInteger.<init>(BigInteger.java:302)
at 
org.apache.flink.table.dataformat.Decimal.fromUnscaledBytes(Decimal.java:214)
at 
org.apache.flink.table.dataformat.Decimal.readDecimalFieldFromSegments(Decimal.java:487)
at 
org.apache.flink.table.dataformat.BinaryRow.getDecimal(BinaryRow.java:334)
at 
org.apache.flink.table.dataformat.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:642)
at 
org.apache.flink.table.dataformat.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:618)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:358)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1370)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1349)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:340)
at SinkConversion$43.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:550)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:527)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:487)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
at 
org.apache.flink.table.runtime.util.StreamRecordCollector.collect(StreamRecordCollector.java:44)
at 
org.apache.flink.table.runtime.operators.sort.SortLimitOperator.endInput(SortLimitOperator.java:98)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.endOperatorInput(OperatorChain.java:265)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.endHeadOperatorInput(OperatorChain.java:249)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:73)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:488)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
at java.lang.Thread.run(Thread.java:834)


*The input data is:*
193|val_193
338|val_338
446|val_446
459|val_459
394|val_394
237|val_237
482|val_482
174|val_174
413|val_413
494|val_494
207|val_207
199|val_199
466|val_466
208|val_208
174|val_174
399|val_399
396|val_396
247|val_247
417|val_417
489|val_489
162|val_162
377|val_377
397|val_397
309|val_309
365|val_365
266|val_266
439|val_439
342|val_342
367|val_367
325|val_325
167|val_167
195|val_195
475|val_475
17|val_17
113|val_113
155|val_155
203|val_203
339|val_339
0|val_0
455|val_455
128|val_128
311|val_311
316|val_316
57|val_57
302|val_302
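Note that the input contains the key 0 (the "0|val_0" row), which ORDER BY c1 ... LIMIT 1 selects first; a zero-valued DECIMAL is a plausible trigger for Decimal.fromUnscaledBytes choking on a zero-length unscaled byte array. A minimal sketch narrowing the repro (an assumption, not a confirmed root cause):

{code:java}
-- ascending order puts the zero key first, so it is the row that
-- reaches the sink conversion
SELECT CAST(k AS DECIMAL(10,2)) AS c1
FROM (VALUES (0), (17), (57)) AS src(k)
ORDER BY c1
LIMIT 1;
{code}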

  was:
*The sql is:*
CREATE TABLE `src` (
key bigint,
v varchar
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',


[jira] [Created] (FLINK-15309) Execute sql appear "NumberFormatException: Zero length BigInteger"

2019-12-17 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15309:
--

 Summary: Execute sql appear "NumberFormatException: Zero length 
BigInteger"
 Key: FLINK-15309
 URL: https://issues.apache.org/jira/browse/FLINK-15309
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy
 Attachments: image-2019-12-18-15-53-24-501.png

*The sql is:*
CREATE TABLE `src` (
key bigint,
v varchar
) WITH (
'format.field-delimiter'='|',
'connector.type'='filesystem',
'format.derive-schema'='true',

'connector.path'='/defender_test_data/daily_regression_batch_hive_1.10/test_cast/sources/src.csv',
'format.type'='csv'
);

select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1;

*The result schema obtained in the code is:*
sinkSchema:root
 |-- c1: DECIMAL(10, 2)
 |-- c2: CHAR(10)
 |-- c3: VARCHAR(10)

*The detail:*
If you execute the SQL above in a sqlClient environment, you can get
the result like this:
 !image-2019-12-18-15-53-24-501.png! 

But if you write the result directly to a csv sink in the code, there will be
an exception:

Caused by: java.lang.NumberFormatException: Zero length BigInteger
at java.math.BigInteger.<init>(BigInteger.java:302)
at 
org.apache.flink.table.dataformat.Decimal.fromUnscaledBytes(Decimal.java:214)
at 
org.apache.flink.table.dataformat.Decimal.readDecimalFieldFromSegments(Decimal.java:487)
at 
org.apache.flink.table.dataformat.BinaryRow.getDecimal(BinaryRow.java:334)
at 
org.apache.flink.table.dataformat.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:642)
at 
org.apache.flink.table.dataformat.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:618)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:358)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1370)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1349)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:340)
at SinkConversion$43.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:550)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:527)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:487)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
at 
org.apache.flink.table.runtime.util.StreamRecordCollector.collect(StreamRecordCollector.java:44)
at 
org.apache.flink.table.runtime.operators.sort.SortLimitOperator.endInput(SortLimitOperator.java:98)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.endOperatorInput(OperatorChain.java:265)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.endHeadOperatorInput(OperatorChain.java:249)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:73)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:488)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
at java.lang.Thread.run(Thread.java:834)


*The input data is:*
193|val_193
338|val_338
446|val_446
459|val_459
394|val_394
237|val_237
482|val_482
174|val_174
413|val_413
494|val_494
207|val_207
199|val_199
466|val_466
208|val_208
174|val_174
399|val_399
396|val_396
247|val_247
417|val_417
489|val_489
162|val_162
377|val_377
397|val_397
309|val_309
365|val_365
266|val_266
439|val_439
342|val_342
367|val_367
325|val_325
167|val_167
195|val_195
475|val_475
17|val_17
113|val_113
155|val_155
203|val_203
339|val_339
0|val_0
455|val_455
128|val_128
311|val_311
316|val_316
57|val_57
302|val_302


[jira] [Commented] (FLINK-15289) Run sql appear error of "Zero-length character strings have no serializable string representation".

2019-12-17 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997991#comment-16997991
 ] 

xiaojin.wy commented on FLINK-15289:


[~lzljs3620320] will repair this issue.

> Run sql appear error of "Zero-length character strings have no serializable 
> string representation".
> ---
>
> Key: FLINK-15289
> URL: https://issues.apache.org/jira/browse/FLINK-15289
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Critical
> Fix For: 1.10.0
>
>
> *The sql is:*
>  CREATE TABLE `INT8_TBL` (
>  q1 BIGINT,
>  q2 BIGINT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_postgres_1.10/test_bigint/sources/INT8_TBL.csv',
>  'format.type'='csv'
>  );
> SELECT '' AS five, q1 AS plus, -q1 AS xm FROM INT8_TBL;
> *The error detail is:*
>  2019-12-17 15:35:07,026 ERROR org.apache.flink.table.client.SqlClient - SQL 
> Client must stop. Unexpected exception. This is a bug. Please consider filing 
> an issue.
>  org.apache.flink.table.api.TableException: Zero-length character strings 
> have no serializable string representation.
>  at 
> org.apache.flink.table.types.logical.CharType.asSerializableString(CharType.java:116)
>  at 
> org.apache.flink.table.descriptors.DescriptorProperties.putTableSchema(DescriptorProperties.java:218)
>  at 
> org.apache.flink.table.catalog.CatalogTableImpl.toProperties(CatalogTableImpl.java:75)
>  at 
> org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:85)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:688)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersist(LocalExecutor.java:488)
>  at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:601)
>  at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:385)
>  at java.util.Optional.ifPresent(Optional.java:159)
>  at 
> org.apache.flink.table.client.cli.CliClient.submitSQLFile(CliClient.java:271)
>  at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:125)
>  at org.apache.flink.table.client.SqlClient.start(SqlClient.java:104)
>  at org.apache.flink.table.client.SqlClient.main(SqlClient.java:180)
> *The input data is:*
>  123,456
>  123,4567890123456789
>  4567890123456789,123
>  4567890123456789,4567890123456789
>  4567890123456789,-4567890123456789



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15289) Run sql appear error of "Zero-length character strings have no serializable string representation".

2019-12-16 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15289:
--

 Summary: Run sql appear error of "Zero-length character strings 
have no serializable string representation".
 Key: FLINK-15289
 URL: https://issues.apache.org/jira/browse/FLINK-15289
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy


*The sql is:*
 CREATE TABLE `INT8_TBL` (
 q1 BIGINT,
 q2 BIGINT
 ) WITH (
 'format.field-delimiter'=',',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='/defender_test_data/daily_regression_batch_postgres_1.10/test_bigint/sources/INT8_TBL.csv',
 'format.type'='csv'
 );

SELECT '' AS five, q1 AS plus, -q1 AS xm FROM INT8_TBL;

*The error detail is:*
 2019-12-17 15:35:07,026 ERROR org.apache.flink.table.client.SqlClient - SQL 
Client must stop. Unexpected exception. This is a bug. Please consider filing 
an issue.
 org.apache.flink.table.api.TableException: Zero-length character strings have 
no serializable string representation.
 at 
org.apache.flink.table.types.logical.CharType.asSerializableString(CharType.java:116)
 at 
org.apache.flink.table.descriptors.DescriptorProperties.putTableSchema(DescriptorProperties.java:218)
 at 
org.apache.flink.table.catalog.CatalogTableImpl.toProperties(CatalogTableImpl.java:75)
 at 
org.apache.flink.table.factories.TableFactoryUtil.findAndCreateTableSink(TableFactoryUtil.java:85)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:688)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersist(LocalExecutor.java:488)
 at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:601)
 at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:385)
 at java.util.Optional.ifPresent(Optional.java:159)
 at 
org.apache.flink.table.client.cli.CliClient.submitSQLFile(CliClient.java:271)
 at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:125)
 at org.apache.flink.table.client.SqlClient.start(SqlClient.java:104)
 at org.apache.flink.table.client.SqlClient.main(SqlClient.java:180)

*The input data is:*
 123,456
 123,4567890123456789
 4567890123456789,123
 4567890123456789,4567890123456789
 4567890123456789,-4567890123456789
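The trace points at CharType.asSerializableString, so the failure plausibly comes from the zero-length literal '' being typed as CHAR(0) when the sink schema is persisted. A sketch of a workaround under that assumption:

{code:java}
-- casting the empty literal to VARCHAR avoids the zero-length CHAR type
SELECT CAST('' AS VARCHAR) AS five, q1 AS plus, -q1 AS xm FROM INT8_TBL;
{code}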



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-15284) Sql error (Failed to push project into table source!)

2019-12-16 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15284:
---
Comment: was deleted

(was: [~ykt836])

> Sql error (Failed to push project into table source!)
> -
>
> Key: FLINK-15284
> URL: https://issues.apache.org/jira/browse/FLINK-15284
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> *The sql is:*
> CREATE TABLE `t` (
>  x INT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
>  'format.type'='csv'
>  );
> SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;
> *The exception is:*
> [ERROR] Could not execute SQL statement. Reason:
>  org.apache.flink.table.api.TableException: Failed to push project into table 
> source! table source with pushdown capability must override and change 
> explainSource() API to explain the pushdown applied!
>  
>  
> *The whole exception is:*
> Caused by: org.apache.flink.table.api.TableException: Sql optimization: 
> Cannot generate a valid execution plan for the given query:Caused by: 
> org.apache.flink.table.api.TableException: Sql optimization: Cannot generate 
> a valid execution plan for the given query:
>  
> LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`], fields=[EXPR$0])
> +- LogicalProject(EXPR$0=[false])
>    +- LogicalTableScan(table=[[default_catalog, default_database, t, source: [CsvTableSource(read fields: x)]]])
>  Failed to push project into table source! table source with pushdown 
> capability must override and change explainSource() API to explain the 
> pushdown applied!Please check the documentation for the set of currently 
> supported SQL features. at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
>  at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428) 
> at 
> 

[jira] [Commented] (FLINK-15284) Sql error (Failed to push project into table source!)

2019-12-16 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997763#comment-16997763
 ] 

xiaojin.wy commented on FLINK-15284:


This is the same issue as https://issues.apache.org/jira/browse/FLINK-15092

> Sql error (Failed to push project into table source!)
> -
>
> Key: FLINK-15284
> URL: https://issues.apache.org/jira/browse/FLINK-15284
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> *The sql is:*
> CREATE TABLE `t` (
>  x INT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
>  'format.type'='csv'
>  );
> SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;
> *The exception is:*
> [ERROR] Could not execute SQL statement. Reason:
>  org.apache.flink.table.api.TableException: Failed to push project into table 
> source! table source with pushdown capability must override and change 
> explainSource() API to explain the pushdown applied!
>  
>  
> *The whole exception is:*
> Caused by: org.apache.flink.table.api.TableException: Sql optimization: 
> Cannot generate a valid execution plan for the given query:Caused by: 
> org.apache.flink.table.api.TableException: Sql optimization: Cannot generate 
> a valid execution plan for the given query:
>  
> LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`], fields=[EXPR$0])
> +- LogicalProject(EXPR$0=[false])
>    +- LogicalTableScan(table=[[default_catalog, default_database, t, source: [CsvTableSource(read fields: x)]]])
>  Failed to push project into table source! table source with pushdown 
> capability must override and change explainSource() API to explain the 
> pushdown applied!Please check the documentation for the set of currently 
> supported SQL features. at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
>  at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428) 
> at 
> 

[jira] [Commented] (FLINK-15284) Sql error (Failed to push project into table source!)

2019-12-16 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997759#comment-16997759
 ] 

xiaojin.wy commented on FLINK-15284:


[~ykt836]

> Sql error (Failed to push project into table source!)
> -
>
> Key: FLINK-15284
> URL: https://issues.apache.org/jira/browse/FLINK-15284
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> *The sql is:*
> CREATE TABLE `t` (
>  x INT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
>  'format.type'='csv'
>  );
> SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;
> *The exception is:*
> [ERROR] Could not execute SQL statement. Reason:
>  org.apache.flink.table.api.TableException: Failed to push project into table 
> source! table source with pushdown capability must override and change 
> explainSource() API to explain the pushdown applied!
>  
>  
> *The whole exception is:*
> Caused by: org.apache.flink.table.api.TableException: Sql optimization: 
> Cannot generate a valid execution plan for the given query:Caused by: 
> org.apache.flink.table.api.TableException: Sql optimization: Cannot generate 
> a valid execution plan for the given query:
>  
> LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`], fields=[EXPR$0])
> +- LogicalProject(EXPR$0=[false])
>    +- LogicalTableScan(table=[[default_catalog, default_database, t, source: [CsvTableSource(read fields: x)]]])
>  Failed to push project into table source! table source with pushdown 
> capability must override and change explainSource() API to explain the 
> pushdown applied!Please check the documentation for the set of currently 
> supported SQL features. at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
>  at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428) 
> at 
> 

[jira] [Created] (FLINK-15284) Sql error (Failed to push project into table source!)

2019-12-16 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15284:
--

 Summary: Sql error (Failed to push project into table source!)
 Key: FLINK-15284
 URL: https://issues.apache.org/jira/browse/FLINK-15284
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy


*The sql is:*

CREATE TABLE `t` (
 x INT
 ) WITH (
 'format.field-delimiter'=',',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='hdfs://zthdev/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
 'format.type'='csv'
 );

SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;

*The exception is:*

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.TableException: Failed to push project into table 
source! table source with pushdown capability must override and change 
explainSource() API to explain the pushdown applied!
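Note that the plans quoted above show the comparison already constant-folded to false (LogicalProject(EXPR$0=[false])): ' ' is X'20', and casting to BINARY(2) plausibly right-pads with zero bytes to X'2000', which differs from X'0020'. So the failure is in pushing the resulting empty projection into the source, not in evaluating the expression. A sketch isolating the folded value (an assumption about the padding behavior, using a VALUES source instead of the csv table):

{code:java}
-- expected: false (X'2000' <> X'0020')
SELECT cast(' ' as BINARY(2)) = X'0020' FROM (VALUES (0)) AS t(x);
{code}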

 

 

*The whole exception is:*

Caused by: org.apache.flink.table.api.TableException: Sql optimization: Cannot 
generate a valid execution plan for the given query:Caused by: 
org.apache.flink.table.api.TableException: Sql optimization: Cannot generate a 
valid execution plan for the given query:
LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`], fields=[EXPR$0])
+- LogicalProject(EXPR$0=[false])
   +- LogicalTableScan(table=[[default_catalog, default_database, t, source: [CsvTableSource(read fields: x)]]])
Failed to push project into table source! table source with pushdown capability 
must override and change explainSource() API to explain the pushdown 
applied!Please check the documentation for the set of currently supported SQL 
features. at 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
 at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
 at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
 at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
 at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
 at scala.collection.immutable.List.foreach(List.scala:392) at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
 at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
 at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
 at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
 at 
org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428) at 
org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryAndPersistInternal$14(LocalExecutor.java:701)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:231)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:699)
 ... 8 moreCaused by: org.apache.flink.table.api.TableException: Failed to push 
project into table source! table source with 

[jira] [Updated] (FLINK-15284) Sql error (Failed to push project into table source!)

2019-12-16 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15284:
---
Description: 
*The sql is:*

CREATE TABLE `t` (
 x INT
 ) WITH (
 'format.field-delimiter'=',',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
 'format.type'='csv'
 );

SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;

*The exception is:*

[ERROR] Could not execute SQL statement. Reason:
 org.apache.flink.table.api.TableException: Failed to push project into table 
source! table source with pushdown capability must override and change 
explainSource() API to explain the pushdown applied!

 

 

*The whole exception is:*

Caused by: org.apache.flink.table.api.TableException: Sql optimization: Cannot generate a valid execution plan for the given query:

LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`], fields=[EXPR$0])
+- LogicalProject(EXPR$0=[false])
   +- LogicalTableScan(table=[[default_catalog, default_database, t, source: [CsvTableSource(read fields: x)]]])

Failed to push project into table source! table source with pushdown capability must override and change explainSource() API to explain the pushdown applied!
Please check the documentation for the set of currently supported SQL features.
at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
at org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryAndPersistInternal$14(LocalExecutor.java:701)
at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:231)
at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:699)
... 8 more
Caused by: org.apache.flink.table.api.TableException: Failed to push project into table source! table source with pushdown capability must override and change explainSource() API to explain the pushdown applied! at

[jira] [Closed] (FLINK-15246) Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL] not equal to TableSink schema: [EXPR$0: TIMESTAMP(3)]

2019-12-16 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy closed FLINK-15246.
--
Resolution: Fixed

> Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL]   not equal to TableSink 
> schema:[EXPR$0: TIMESTAMP(3)]
> ---
>
> Key: FLINK-15246
> URL: https://issues.apache.org/jira/browse/FLINK-15246
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> When I execute the sql below and check its result, the "Query result
> schema" is not equal to the "TableSink schema".
>  
>  
> The sql is:
> CREATE TABLE `t` (
>  x INT
> ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_case_when_coercion/sources/t.csv',
>  'format.type'='csv'
> );
> SELECT CASE WHEN true THEN cast('2017-12-12 09:30:00.0' as timestamp) ELSE 
> cast(2 as tinyint) END FROM t;
>  
> The exception is:
> org.apache.flink.table.api.ValidationException: Field types of query result 
> and registered TableSink 
> `default_catalog`.`default_database`.`_tmp_table_443938765` do not match. 
> Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL] TableSink schema: 
> [EXPR$0: TIMESTAMP(3)]
>  
> The input data is:
> 1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15246) Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL] not equal to TableSink schema: [EXPR$0: TIMESTAMP(3)]

2019-12-16 Thread xiaojin.wy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997229#comment-16997229
 ] 

xiaojin.wy commented on FLINK-15246:


This issue does not appear any more when I use the newest code.

> Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL]   not equal to TableSink 
> schema:[EXPR$0: TIMESTAMP(3)]
> ---
>
> Key: FLINK-15246
> URL: https://issues.apache.org/jira/browse/FLINK-15246
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.10.0
>
>
> When I execute the sql below and check its result, the "Query result
> schema" is not equal to the "TableSink schema".
>  
>  
> The sql is:
> CREATE TABLE `t` (
>  x INT
> ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_case_when_coercion/sources/t.csv',
>  'format.type'='csv'
> );
> SELECT CASE WHEN true THEN cast('2017-12-12 09:30:00.0' as timestamp) ELSE 
> cast(2 as tinyint) END FROM t;
>  
> The exception is:
> org.apache.flink.table.api.ValidationException: Field types of query result 
> and registered TableSink 
> `default_catalog`.`default_database`.`_tmp_table_443938765` do not match. 
> Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL] TableSink schema: 
> [EXPR$0: TIMESTAMP(3)]
>  
> The input data is:
> 1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15246) Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL] not equal to TableSink schema: [EXPR$0: TIMESTAMP(3)]

2019-12-13 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15246:
--

 Summary: Query result schema: [EXPR$0: TIMESTAMP(6) NOT NULL]   
not equal to TableSink schema:[EXPR$0: TIMESTAMP(3)]
 Key: FLINK-15246
 URL: https://issues.apache.org/jira/browse/FLINK-15246
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy


When I execute the sql below and check its result, the "Query result
schema" is not equal to the "TableSink schema".

 

 

The sql is:

CREATE TABLE `t` (
 x INT
) WITH (
 'format.field-delimiter'=',',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_case_when_coercion/sources/t.csv',
 'format.type'='csv'
);

SELECT CASE WHEN true THEN cast('2017-12-12 09:30:00.0' as timestamp) ELSE 
cast(2 as tinyint) END FROM t;

 

The exception is:

org.apache.flink.table.api.ValidationException: Field types of query result and 
registered TableSink 
`default_catalog`.`default_database`.`_tmp_table_443938765` do not match. Query 
result schema: [EXPR$0: TIMESTAMP(6) NOT NULL] TableSink schema: [EXPR$0: 
TIMESTAMP(3)]

 

The input data is:

1
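A possible workaround (an editorial sketch, not from the ticket): since the sink was registered with TIMESTAMP(3), casting the whole CASE expression down to that precision should make the query's result schema match the sink schema, assuming the planner in use accepts an explicit precision in CAST:

{code:sql}
-- Hedged sketch: force the CASE result to the sink's TIMESTAMP(3) instead of
-- letting type coercion infer TIMESTAMP(6).
SELECT CAST(
  CASE WHEN true THEN cast('2017-12-12 09:30:00.0' as timestamp)
       ELSE cast(2 as tinyint) END
  AS TIMESTAMP(3))
FROM t;
{code}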



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-15238) A sql can't generate a valid execution plan

2019-12-12 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy closed FLINK-15238.
--
Resolution: Invalid

> A sql can't generate a valid execution plan
> ---
>
> Key: FLINK-15238
> URL: https://issues.apache.org/jira/browse/FLINK-15238
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Priority: Major
>
> *The tables and the query are like this:*
> CREATE TABLE `scott_emp` (
> > empno INT,
> > ename VARCHAR,
> > job VARCHAR,
> > mgr INT,
> > hiredate DATE,
> > sal DOUBLE,
> > comm DOUBLE,
> > deptno INT
> > ) WITH (
> > 'format.field-delimiter'='|',
> > 'connector.type'='filesystem',
> > 'format.derive-schema'='true',
> > 'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_scalar/sources/scott_emp.csv',
> > 'format.type'='csv'
> > );
> CREATE TABLE `scott_dept` (
> > deptno INT,
> > dname VARCHAR,
> > loc VARCHAR
> > ) WITH (
> > 'format.field-delimiter'='|',
> > 'connector.type'='filesystem',
> > 'format.derive-schema'='true',
> > 'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_scalar/sources/scott_dept.csv',
> > 'format.type'='csv'
> > );
> select deptno, (select empno from scott_emp order by empno limit 1) as x from 
> scott_dept;
>  
>  
> *After executing the sql, the exception will appear:*
> [ERROR] Could not execute SQL statement. Reason:
>  org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query:
> LogicalProject(deptno=[$0], x=[$3])
>   LogicalJoin(condition=[true], joinType=[left])
>     LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
>     LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
>       LogicalProject(empno=[$0])
>         LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])
> This exception indicates that the query uses an unsupported SQL feature.
>  Please check the documentation for the set of currently supported SQL 
> features.
>  
>  
> *The whole exception is:*
> Caused by: org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:
>
> LogicalProject(deptno=[$0], x=[$3])
>   LogicalJoin(condition=[true], joinType=[left])
>     LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
>     LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
>       LogicalProject(empno=[$0])
>         LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])
>
> This exception indicates that the query uses an unsupported SQL feature. Please check the documentation for the set of currently supported SQL features.
> at org.apache.flink.table.plan.Optimizer.runVolcanoPlanner(Optimizer.scala:284)
> at org.apache.flink.table.plan.Optimizer.optimizeLogicalPlan(Optimizer.scala:199)
> at org.apache.flink.table.plan.StreamOptimizer.optimize(StreamOptimizer.scala:66)
> at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:389)
> at org.apache.flink.table.planner.StreamPlanner.writeToRetractSink(StreamPlanner.scala:308)
> at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$writeToSink(StreamPlanner.scala:272)
> at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:166)
> at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:145)
> at scala.Option.map(Option.scala:146)
> at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:145)
> at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
> at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at org.apache.flink.table.planner.StreamPlanner.translate(StreamPlanner.scala:117)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
> at

[jira] [Updated] (FLINK-15238) A sql can't generate a valid execution plan

2019-12-12 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15238:
---
Description: 
*The tables and the query are like this:*

CREATE TABLE `scott_emp` (
> empno INT,
> ename VARCHAR,
> job VARCHAR,
> mgr INT,
> hiredate DATE,
> sal DOUBLE,
> comm DOUBLE,
> deptno INT
> ) WITH (
> 'format.field-delimiter'='|',
> 'connector.type'='filesystem',
> 'format.derive-schema'='true',
> 'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_scalar/sources/scott_emp.csv',
> 'format.type'='csv'
> );

CREATE TABLE `scott_dept` (
> deptno INT,
> dname VARCHAR,
> loc VARCHAR
> ) WITH (
> 'format.field-delimiter'='|',
> 'connector.type'='filesystem',
> 'format.derive-schema'='true',
> 'connector.path'='/defender_test_data/daily_regression_blink_sql_1.10/test_scalar/sources/scott_dept.csv',
> 'format.type'='csv'
> );

select deptno, (select empno from scott_emp order by empno limit 1) as x from 
scott_dept;

 

 

*After executing the sql, the exception will appear:*

[ERROR] Could not execute SQL statement. Reason:
 org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
    LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
      LogicalProject(empno=[$0])
        LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])

This exception indicates that the query uses an unsupported SQL feature.
 Please check the documentation for the set of currently supported SQL features.
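A possible rewrite (an editorial sketch, not from the ticket): `ORDER BY empno LIMIT 1` picks the smallest empno, so the sorted scalar subquery can be expressed as an uncorrelated aggregate, which the planner may be able to handle where the sort-with-fetch form is rejected; both forms yield NULL when scott_emp is empty:

{code:sql}
-- Hedged sketch: replace the ORDER BY ... LIMIT 1 subquery with MIN().
SELECT deptno, (SELECT MIN(empno) FROM scott_emp) AS x
FROM scott_dept;
{code}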

 

 

*The whole exception is:*

Caused by: org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
    LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
      LogicalProject(empno=[$0])
        LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])

This exception indicates that the query uses an unsupported SQL feature. Please check the documentation for the set of currently supported SQL features.
at org.apache.flink.table.plan.Optimizer.runVolcanoPlanner(Optimizer.scala:284)
at org.apache.flink.table.plan.Optimizer.optimizeLogicalPlan(Optimizer.scala:199)
at org.apache.flink.table.plan.StreamOptimizer.optimize(StreamOptimizer.scala:66)
at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:389)
at org.apache.flink.table.planner.StreamPlanner.writeToRetractSink(StreamPlanner.scala:308)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$writeToSink(StreamPlanner.scala:272)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:166)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:145)
at scala.Option.map(Option.scala:146)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:145)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.StreamPlanner.translate(StreamPlanner.scala:117)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
at org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$12(LocalExecutor.java:640)
at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:227)
at

[jira] [Updated] (FLINK-15238) A sql can't generate a valid execution plan

2019-12-12 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15238:
---
Description: 
The tables and the query are like this:

CREATE TABLE `scott_emp` (
  empno INT, ename VARCHAR, job VARCHAR, mgr INT, hiredate DATE, sal DOUBLE, comm DOUBLE, deptno INT
) WITH (
  'format.field-delimiter'='|',
  'connector.type'='filesystem',
  'format.derive-schema'='true',
  'connector.path'='hdfs://zthdev/defender_test_data/daily_regression_blink_sql_1.10/test_scalar/sources/scott_emp.csv',
  'format.type'='csv'
);

 

After executing the sql, the exception will appear:

[ERROR] Could not execute SQL statement. Reason:
 org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
    LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
      LogicalProject(empno=[$0])
        LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])

This exception indicates that the query uses an unsupported SQL feature.
 Please check the documentation for the set of currently supported SQL features.

 

 

The whole exception is:

Caused by: org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
    LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
      LogicalProject(empno=[$0])
        LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])

This exception indicates that the query uses an unsupported SQL feature. Please check the documentation for the set of currently supported SQL features.
at org.apache.flink.table.plan.Optimizer.runVolcanoPlanner(Optimizer.scala:284)
at org.apache.flink.table.plan.Optimizer.optimizeLogicalPlan(Optimizer.scala:199)
at org.apache.flink.table.plan.StreamOptimizer.optimize(StreamOptimizer.scala:66)
at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:389)
at org.apache.flink.table.planner.StreamPlanner.writeToRetractSink(StreamPlanner.scala:308)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$writeToSink(StreamPlanner.scala:272)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:166)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:145)
at scala.Option.map(Option.scala:146)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:145)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.StreamPlanner.translate(StreamPlanner.scala:117)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
at org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$12(LocalExecutor.java:640)
at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:227)
at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:638)
... 8 more

  was:
The tables and the query are like this:

 

 

After executing the sql, the exception will appear:

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, 

[jira] [Created] (FLINK-15238) A sql can't generate a valid execution plan

2019-12-12 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15238:
--

 Summary: A sql can't generate a valid execution plan
 Key: FLINK-15238
 URL: https://issues.apache.org/jira/browse/FLINK-15238
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: xiaojin.wy


The tables and the query are like this:

 

 

After executing the sql, the exception will appear:

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.TableException: Cannot generate a valid execution 
plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
    LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
      LogicalProject(empno=[$0])
        LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])

This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.

 

 

The whole exception is:

Caused by: org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
    LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
      LogicalProject(empno=[$0])
        LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])

This exception indicates that the query uses an unsupported SQL feature. Please check the documentation for the set of currently supported SQL features.
at org.apache.flink.table.plan.Optimizer.runVolcanoPlanner(Optimizer.scala:284)
at org.apache.flink.table.plan.Optimizer.optimizeLogicalPlan(Optimizer.scala:199)
at org.apache.flink.table.plan.StreamOptimizer.optimize(StreamOptimizer.scala:66)
at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:389)
at org.apache.flink.table.planner.StreamPlanner.writeToRetractSink(StreamPlanner.scala:308)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$writeToSink(StreamPlanner.scala:272)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:166)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:145)
at scala.Option.map(Option.scala:146)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:145)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.StreamPlanner.translate(StreamPlanner.scala:117)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
at org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$12(LocalExecutor.java:640)
at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:227)
at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:638)
... 8 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15217) 'java.time.LocalDate' should support for the CSV input format.

2019-12-12 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15217:
---
Description: 
*The sql is like this:*

CREATE TABLE `DATE_TBL` (
 > f1 date
 > ) WITH (
 > 'format.field-delimiter'='|',
 > 'connector.type'='filesystem',
 > 'format.derive-schema'='true',
 > 'connector.path'='/defender_test_data/daily/test_date/sources/DATE_TBL.csv',
 > 'format.type'='csv'
 > );

SELECT f1 AS Fifteen FROM DATE_TBL;

 

*After executing the sql, there will be an exception:*

[ERROR] Could not execute SQL statement. Reason:
 java.lang.IllegalArgumentException: The type 'java.time.LocalDate' is not 
supported for the CSV input format.
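A possible workaround (an editorial sketch, not from the ticket): declare the column as VARCHAR, which the legacy CSV source can parse, and cast it to DATE at query time; the table name `DATE_TBL_STR` below is invented for the sketch:

{code:sql}
-- Hedged sketch: read the date column as a string and cast in the query.
CREATE TABLE `DATE_TBL_STR` (
 f1 VARCHAR
) WITH (
 'format.field-delimiter'='|',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 'connector.path'='/defender_test_data/daily/test_date/sources/DATE_TBL.csv',
 'format.type'='csv'
);

SELECT CAST(f1 AS DATE) AS Fifteen FROM DATE_TBL_STR;
{code}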

*The input file's content is:*

1957-04-09
 1957-06-13
 1996-02-28
 1996-02-29
 1996-03-01
 1996-03-02
 1997-02-28
 1997-03-01
 1997-03-02
 2000-04-01
 2000-04-02
 2000-04-03
 2038-04-08
 2039-04-09

 

*The whole exception is:*

Caused by: java.lang.IllegalArgumentException: The type 'java.time.LocalDate' is not supported for the CSV input format.
at org.apache.flink.api.common.io.GenericCsvInputFormat.setFieldsGeneric(GenericCsvInputFormat.java:289)
at org.apache.flink.api.java.io.RowCsvInputFormat.<init>(RowCsvInputFormat.java:64)
at org.apache.flink.table.sources.CsvTableSource$CsvInputFormatConfig.createInputFormat(CsvTableSource.java:518)
at org.apache.flink.table.sources.CsvTableSource.getDataStream(CsvTableSource.java:182)
at org.apache.flink.table.plan.nodes.datastream.StreamTableSourceScan.translateToPlan(StreamTableSourceScan.scala:97)
at org.apache.flink.table.planner.StreamPlanner.translateToCRow(StreamPlanner.scala:251)
at org.apache.flink.table.planner.StreamPlanner.translateOptimized(StreamPlanner.scala:410)
at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:400)
at org.apache.flink.table.planner.StreamPlanner.writeToRetractSink(StreamPlanner.scala:308)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$writeToSink(StreamPlanner.scala:272)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:166)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:145)
at scala.Option.map(Option.scala:146)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:145)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.StreamPlanner.translate(StreamPlanner.scala:117)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
at org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$12(LocalExecutor.java:640)
at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:227)
at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:638)

 

 

  was:
*The sql is like this:*

CREATE TABLE `DATE_TBL` (
 > f1 date
 > ) WITH (
 > 'format.field-delimiter'='|',
 > 'connector.type'='filesystem',
 > 'format.derive-schema'='true',
 > 'connector.path'='hdfs://zthdev/defender_test_data/daily/test_date/sources/DATE_TBL.csv',
 > 'format.type'='csv'
 > );

SELECT f1 AS Fifteen FROM DATE_TBL;

 

*After executing the sql, there will be an exception:*

[ERROR] Could not execute SQL statement. Reason:
 java.lang.IllegalArgumentException: The type 'java.time.LocalDate' is not 
supported for the CSV input format.

*The input file's content is:*

1957-04-09
 1957-06-13
 1996-02-28
 1996-02-29
 1996-03-01
 1996-03-02
 1997-02-28
 1997-03-01
 1997-03-02
 2000-04-01
 2000-04-02
 2000-04-03
 2038-04-08
 2039-04-09

 

*The whole exception is:*


[jira] [Created] (FLINK-15217) 'java.time.LocalDate' should support for the CSV input format.

2019-12-11 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-15217:
--

 Summary: 'java.time.LocalDate' should support for the CSV input 
format.
 Key: FLINK-15217
 URL: https://issues.apache.org/jira/browse/FLINK-15217
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.10.0
Reporter: xiaojin.wy


*The sql is like this:*

CREATE TABLE `DATE_TBL` (
 > f1 date
 > ) WITH (
 > 'format.field-delimiter'='|',
 > 'connector.type'='filesystem',
 > 'format.derive-schema'='true',
 > 'connector.path'='hdfs://zthdev/defender_test_data/daily/test_date/sources/DATE_TBL.csv',
 > 'format.type'='csv'
 > );

SELECT f1 AS Fifteen FROM DATE_TBL;

 

*After executing the sql, there will be an exception:*

[ERROR] Could not execute SQL statement. Reason:
 java.lang.IllegalArgumentException: The type 'java.time.LocalDate' is not 
supported for the CSV input format.

*The input file's content is:*

1957-04-09
 1957-06-13
 1996-02-28
 1996-02-29
 1996-03-01
 1996-03-02
 1997-02-28
 1997-03-01
 1997-03-02
 2000-04-01
 2000-04-02
 2000-04-03
 2038-04-08
 2039-04-09

 

*The whole exception is:*

Caused by: java.lang.IllegalArgumentException: The type 'java.time.LocalDate' is not supported for the CSV input format.
at org.apache.flink.api.common.io.GenericCsvInputFormat.setFieldsGeneric(GenericCsvInputFormat.java:289)
at org.apache.flink.api.java.io.RowCsvInputFormat.<init>(RowCsvInputFormat.java:64)
at org.apache.flink.table.sources.CsvTableSource$CsvInputFormatConfig.createInputFormat(CsvTableSource.java:518)
at org.apache.flink.table.sources.CsvTableSource.getDataStream(CsvTableSource.java:182)
at org.apache.flink.table.plan.nodes.datastream.StreamTableSourceScan.translateToPlan(StreamTableSourceScan.scala:97)
at org.apache.flink.table.planner.StreamPlanner.translateToCRow(StreamPlanner.scala:251)
at org.apache.flink.table.planner.StreamPlanner.translateOptimized(StreamPlanner.scala:410)
at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:400)
at org.apache.flink.table.planner.StreamPlanner.writeToRetractSink(StreamPlanner.scala:308)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$writeToSink(StreamPlanner.scala:272)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:166)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:145)
at scala.Option.map(Option.scala:146)
at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:145)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.planner.StreamPlanner.translate(StreamPlanner.scala:117)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
at org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$12(LocalExecutor.java:640)
at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:227)
at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:638)

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

