[jira] [Comment Edited] (FLINK-9906) Flink Job not running with no resource

2018-07-21 Thread Congxian Qiu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551887#comment-16551887
 ] 

Congxian Qiu edited comment on FLINK-9906 at 7/22/18 3:41 AM:
--

From the given log, it seems your application could not get all the resources 
it wants.
{code:java}
2018-07-22 09:51:15,247 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- Job  
(10dd71dff6033ee1dd613e9ccf854c29) switched from state RUNNING to FAILING.
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: 
Could not allocate all requires slots within timeout of 30 ms. Slots 
required: 1, slots allocated: 0
at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.lambda$scheduleEager$3(ExecutionGraph.java:984)
at 
java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at 
java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at 
org.apache.flink.runtime.concurrent.FutureUtils$ResultConjunctFuture.handleCompletedFuture(FutureUtils.java:553)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at 
org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:811)
at akka.dispatch.OnComplete.internal(Future.scala:258)
at akka.dispatch.OnComplete.internal(Future.scala:256)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at 
org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:83)
at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:534)
at 
akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:20)
at 
akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:18)
at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at 
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at 
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107){code}
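If the cluster simply cannot start TaskManagers in time, one mitigation is to give the slot request more time or to request smaller containers. A hedged sketch for flink-conf.yaml (key names from the Flink 1.5 configuration reference; the values are illustrative, not recommendations):

```yaml
# Wait longer for TaskManagers to come up before failing the job (milliseconds).
slot.request.timeout: 600000
# Smaller containers are easier for a busy YARN queue to satisfy.
taskmanager.heap.mb: 1024
taskmanager.numberOfTaskSlots: 1
```

Checking the YARN ResourceManager UI for pending container requests would confirm whether the TaskManager containers were ever granted.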


was (Author: klion26):
From the given log, it seems your application could not get all the resources 
it wants.

> Flink Job not running with no resource
> --
>
> Key: FLINK-9906
> URL: https://issues.apache.org/jira/browse/FLINK-9906
> Project: Flink
>  Issue Type: Bug
>  Components: Scheduler
>Affects Versions: 1.5.1
>Reporter: godfrey johnson
>Priority: Major
> Attachments: Flink Job Not Running.log
>
>
> Flink job was submitted to YARN and the JobManager was running, but the job 
> was stuck in CREATED (SCHEDULED) status in the Flink web UI, and no 
> TaskManager was running. [^Flink Job Not Running.log]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9906) Flink Job not running with no resource

2018-07-21 Thread Congxian Qiu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551887#comment-16551887
 ] 

Congxian Qiu commented on FLINK-9906:
-

From the given log, it seems your application could not get all the resources 
it wants.

> Flink Job not running with no resource
> --
>
> Key: FLINK-9906
> URL: https://issues.apache.org/jira/browse/FLINK-9906
> Project: Flink
>  Issue Type: Bug
>  Components: Scheduler
>Affects Versions: 1.5.1
>Reporter: godfrey johnson
>Priority: Major
> Attachments: Flink Job Not Running.log
>
>
> Flink job was submitted to YARN and the JobManager was running, but the job 
> was stuck in CREATED (SCHEDULED) status in the Flink web UI, and no 
> TaskManager was running. [^Flink Job Not Running.log]





[jira] [Commented] (FLINK-9134) Update Calcite dependency to 1.17

2018-07-21 Thread Rong Rong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551884#comment-16551884
 ] 

Rong Rong commented on FLINK-9134:
--

Calcite 1.17 has been 
[released|https://mail-archives.apache.org/mod_mbox/calcite-dev/201807.mbox/%3CCAHFToO04LF6opte0%3DNYvb3q9145jmxq%2BFHAdv7mBUv844W66xg%40mail.gmail.com%3E].

> Update Calcite dependency to 1.17
> -
>
> Key: FLINK-9134
> URL: https://issues.apache.org/jira/browse/FLINK-9134
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>
> This is an umbrella issue for tasks that need to be performed when upgrading 
> to Calcite 1.17 once it is released.





[jira] [Created] (FLINK-9906) Flink Job not running with no resource

2018-07-21 Thread godfrey johnson (JIRA)
godfrey johnson created FLINK-9906:
--

 Summary: Flink Job not running with no resource
 Key: FLINK-9906
 URL: https://issues.apache.org/jira/browse/FLINK-9906
 Project: Flink
  Issue Type: Bug
  Components: Scheduler
Affects Versions: 1.5.1
Reporter: godfrey johnson
 Attachments: Flink Job Not Running.log

Flink job was submitted to YARN and the JobManager was running, but the job 
was stuck in CREATED (SCHEDULED) status in the Flink web UI, and no 
TaskManager was running. [^Flink Job Not Running.log]





[jira] [Updated] (FLINK-9609) Add bucket ready mechanism for BucketingSink when checkpoint complete

2018-07-21 Thread zhangminglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangminglei updated FLINK-9609:

Fix Version/s: 1.6.1

> Add bucket ready mechanism for BucketingSink when checkpoint complete
> -
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector, Streaming Connectors
>Affects Versions: 1.5.0, 1.4.2
>Reporter: zhangminglei
>Assignee: zhangminglei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.1
>
>
> Currently, BucketingSink only supports {{notifyCheckpointComplete}}. However, 
> users may want to do some extra work when a bucket is ready. It would be nice 
> if we could support a {{BucketReady}} mechanism, i.e. tell users when a 
> bucket is ready for use. For example, if one bucket is created every 5 
> minutes, then at the end of those 5 minutes, before the next bucket is 
> created, the user might need to react to the previous bucket becoming ready, 
> such as sending the bucket-ready timestamp to a server.
> Here, "bucket ready" means that no part file under the bucket has a 
> {{.pending}} or {{.in-progress}} suffix; only then can the bucket be 
> considered ready for use. Just as a watermark means that no elements with a 
> timestamp older than or equal to the watermark should arrive at the window, 
> we could borrow the watermark concept here and call this a *BucketWatermark*.
> Recently, I found a user who wants this functionality:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Let-BucketingSink-roll-file-on-each-checkpoint-td19034.html
> Below is what he said:
> My use case: we read data from a message queue and write it to HDFS, and our 
> ETL team uses the data in HDFS. *In this case, ETL needs to know whether all 
> data is ready to be read accurately*, so we use a counter to count how much 
> data has been written; if the counter equals the number of records received, 
> we consider the HDFS file ready. We send the counter message in a custom sink 
> so ETL knows how much data has been written, but with the current 
> BucketingSink, even though the HDFS file is flushed, ETL may still be unable 
> to read the data. If we could close the file during a checkpoint, the result 
> would be accurate. As for the HDFS small-file problem, it can be controlled 
> by using a bigger checkpoint interval.
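The readiness condition described above can be sketched as a small check over part-file names; the class and method names are illustrative, not BucketingSink API:

```java
import java.util.Arrays;
import java.util.List;

public class BucketReadyCheck {

    // A bucket is "ready" once no part file under it still carries a
    // .pending or .in-progress suffix.
    static boolean isBucketReady(List<String> partFileNames) {
        return partFileNames.stream()
                .noneMatch(n -> n.endsWith(".pending") || n.endsWith(".in-progress"));
    }

    public static void main(String[] args) {
        // All parts finalized: the bucket is ready.
        System.out.println(isBucketReady(Arrays.asList("part-0-0", "part-0-1"))); // true
        // One part still being written: not ready yet.
        System.out.println(isBucketReady(Arrays.asList("part-0-0", "part-0-1.in-progress"))); // false
    }
}
```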





[GitHub] flink issue #6378: [FLINK-9236] [pom] upgrade the version of apache parent p...

2018-07-21 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/6378
  
Could you please push the code and trigger Travis again?


---


[jira] [Commented] (FLINK-9236) Use Apache Parent POM 19

2018-07-21 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551857#comment-16551857
 ] 

ASF GitHub Bot commented on FLINK-9236:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/6378
  
Could you please push the code and trigger Travis again?


> Use Apache Parent POM 19
> 
>
> Key: FLINK-9236
> URL: https://issues.apache.org/jira/browse/FLINK-9236
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: jiayichao
>Priority: Major
>  Labels: pull-request-available
>
> Flink is still using Apache Parent POM 18. Apache Parent POM 19 is out.
> This will also fix Javadoc generation with JDK 10+





[jira] [Comment Edited] (FLINK-7795) Utilize error-prone to discover common coding mistakes

2018-07-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345955#comment-16345955
 ] 

Ted Yu edited comment on FLINK-7795 at 7/21/18 9:26 PM:


error-prone has a JDK 8 dependency.


was (Author: yuzhih...@gmail.com):
error-prone has JDK 8 dependency.

> Utilize error-prone to discover common coding mistakes
> --
>
> Key: FLINK-7795
> URL: https://issues.apache.org/jira/browse/FLINK-7795
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Priority: Major
>
> http://errorprone.info/ is a tool that detects common coding mistakes.
> We should incorporate it into the Flink build process.
> Here are the dependencies:
> {code}
> <dependency>
>   <groupId>com.google.errorprone</groupId>
>   <artifactId>error_prone_annotation</artifactId>
>   <version>${error-prone.version}</version>
>   <scope>provided</scope>
> </dependency>
> <dependency>
>   <groupId>com.google.auto.service</groupId>
>   <artifactId>auto-service</artifactId>
>   <version>1.0-rc3</version>
>   <optional>true</optional>
> </dependency>
> <dependency>
>   <groupId>com.google.errorprone</groupId>
>   <artifactId>error_prone_check_api</artifactId>
>   <version>${error-prone.version}</version>
>   <scope>provided</scope>
>   <exclusions>
>     <exclusion>
>       <groupId>com.google.code.findbugs</groupId>
>       <artifactId>jsr305</artifactId>
>     </exclusion>
>   </exclusions>
> </dependency>
> <dependency>
>   <groupId>com.google.errorprone</groupId>
>   <artifactId>javac</artifactId>
>   <version>9-dev-r4023-3</version>
>   <scope>provided</scope>
> </dependency>
> {code}





[jira] [Comment Edited] (FLINK-9150) Prepare for Java 10

2018-07-21 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16473198#comment-16473198
 ] 

Ted Yu edited comment on FLINK-9150 at 7/21/18 9:25 PM:


A similar error is encountered when building against JDK 11.


was (Author: yuzhih...@gmail.com):
Similar error is encountered when building against jdk 11.

> Prepare for Java 10
> ---
>
> Key: FLINK-9150
> URL: https://issues.apache.org/jira/browse/FLINK-9150
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: Ted Yu
>Priority: Major
>
> Java 9 is not a LTS release.
> When compiling with Java 10, I see the following compilation error:
> {code}
> [ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not 
> resolve dependencies for project 
> org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find 
> artifact jdk.tools:jdk.tools:jar:1.6 at specified path 
> /a/jdk-10/../lib/tools.jar -> [Help 1]
> {code}





[jira] [Updated] (FLINK-9363) Bump up the Jackson version

2018-07-21 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-9363:
--
Description: 
CVE's for Jackson:

CVE-2017-17485
CVE-2018-5968
CVE-2018-7489

We can upgrade to 2.9.5

  was:
CVE's for Jackson:

CVE-2017-17485
CVE-2018-5968
CVE-2018-7489


We can upgrade to 2.9.5


> Bump up the Jackson version
> ---
>
> Key: FLINK-9363
> URL: https://issues.apache.org/jira/browse/FLINK-9363
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>  Labels: security
>
> CVE's for Jackson:
> CVE-2017-17485
> CVE-2018-5968
> CVE-2018-7489
> We can upgrade to 2.9.5





[jira] [Commented] (FLINK-7205) Add UUID supported in TableAPI/SQL

2018-07-21 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551739#comment-16551739
 ] 

ASF GitHub Bot commented on FLINK-7205:
---

GitHub user buptljy opened a pull request:

https://github.com/apache/flink/pull/6381

[FLINK-7205] [table]Add UUID supported in SQL and TableApi

## What is the purpose of the change
* Add UUID supported in SQL and TableApi.
## Brief change log
* Add UUID function.
## Verifying this change
* Unit tests.

## Documentation
* Added documentation in table.md and sql.md.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/buptljy/flink FLINK-7205

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6381.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6381


commit 5cde30f8feb2feb49dc1381af3d1d288c39122f0
Author: wind 
Date:   2018-07-21T15:20:21Z

add uuid table function

commit 8829de68bee64c6709d55efd17c09beabdb7a8be
Author: wind 
Date:   2018-07-21T15:32:42Z

add docs for uuid




> Add UUID supported in TableAPI/SQL
> --
>
> Key: FLINK-7205
> URL: https://issues.apache.org/jira/browse/FLINK-7205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>  Labels: pull-request-available
>
> UUID() returns a value that conforms to UUID version 1 as described in RFC 
> 4122. The value is a 128-bit number represented as a utf8 string of five 
> hexadecimal numbers in ---- format:
> The first three numbers are generated from the low, middle, and high parts of 
> a timestamp. The high part also includes the UUID version number.
> The fourth number preserves temporal uniqueness in case the timestamp value 
> loses monotonicity (for example, due to daylight saving time).
> The fifth number is an IEEE 802 node number that provides spatial uniqueness. 
> A random number is substituted if the latter is not available (for example, 
> because the host device has no Ethernet card, or it is unknown how to find 
> the hardware address of an interface on the host operating system). In this 
> case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision 
> should have very low probability.
> See: [RFC 4122: 
> http://www.ietf.org/rfc/rfc4122.txt|http://www.ietf.org/rfc/rfc4122.txt]
> See detailed semantics:
>MySql: 
> [https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid|https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid]
> Any feedback is welcome :).





[GitHub] flink pull request #6381: [FLINK-7205] [table]Add UUID supported in SQL ...

2018-07-21 Thread buptljy
GitHub user buptljy opened a pull request:

https://github.com/apache/flink/pull/6381

[FLINK-7205] [table]Add UUID supported in SQL and TableApi

## What is the purpose of the change
* Add UUID supported in SQL and TableApi.
## Brief change log
* Add UUID function.
## Verifying this change
* Unit tests.

## Documentation
* Added documentation in table.md and sql.md.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/buptljy/flink FLINK-7205

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6381.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6381


commit 5cde30f8feb2feb49dc1381af3d1d288c39122f0
Author: wind 
Date:   2018-07-21T15:20:21Z

add uuid table function

commit 8829de68bee64c6709d55efd17c09beabdb7a8be
Author: wind 
Date:   2018-07-21T15:32:42Z

add docs for uuid




---


[jira] [Updated] (FLINK-7205) Add UUID supported in TableAPI/SQL

2018-07-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-7205:
--
Labels: pull-request-available  (was: )

> Add UUID supported in TableAPI/SQL
> --
>
> Key: FLINK-7205
> URL: https://issues.apache.org/jira/browse/FLINK-7205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>  Labels: pull-request-available
>
> UUID() returns a value that conforms to UUID version 1 as described in RFC 
> 4122. The value is a 128-bit number represented as a utf8 string of five 
> hexadecimal numbers in ---- format:
> The first three numbers are generated from the low, middle, and high parts of 
> a timestamp. The high part also includes the UUID version number.
> The fourth number preserves temporal uniqueness in case the timestamp value 
> loses monotonicity (for example, due to daylight saving time).
> The fifth number is an IEEE 802 node number that provides spatial uniqueness. 
> A random number is substituted if the latter is not available (for example, 
> because the host device has no Ethernet card, or it is unknown how to find 
> the hardware address of an interface on the host operating system). In this 
> case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision 
> should have very low probability.
> See: [RFC 4122: 
> http://www.ietf.org/rfc/rfc4122.txt|http://www.ietf.org/rfc/rfc4122.txt]
> See detailed semantics:
>MySql: 
> [https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid|https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid]
> Any feedback is welcome :).





[jira] [Commented] (FLINK-7205) Add UUID supported in TableAPI/SQL

2018-07-21 Thread buptljy (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551722#comment-16551722
 ] 

buptljy commented on FLINK-7205:


[~fhueske]

I find that we cannot test the _*UUID*_ function in *_ScalarFunctionsTest_* 
because we're not able to provide an expected UUID...

In my opinion, we can test the _*UUID*_ function in another way, such as 
checking the length of the string and the positions of the separator "-", 
though it does not look very appropriate.
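A minimal sketch of that shape-based check (note that java.util.UUID.randomUUID() yields a version-4 UUID, not the version-1 UUID described in the issue; the helper name is illustrative):

```java
import java.util.UUID;

public class UuidShapeCheck {

    // Checks only the textual shape: 36 characters with '-' at
    // positions 8, 13, 18, and 23.
    static boolean hasUuidShape(String s) {
        return s.length() == 36
                && s.charAt(8) == '-' && s.charAt(13) == '-'
                && s.charAt(18) == '-' && s.charAt(23) == '-';
    }

    public static void main(String[] args) {
        String u = UUID.randomUUID().toString();
        System.out.println(hasUuidShape(u));            // true
        System.out.println(hasUuidShape("not-a-uuid")); // false
    }
}
```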

> Add UUID supported in TableAPI/SQL
> --
>
> Key: FLINK-7205
> URL: https://issues.apache.org/jira/browse/FLINK-7205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>
> UUID() returns a value that conforms to UUID version 1 as described in RFC 
> 4122. The value is a 128-bit number represented as a utf8 string of five 
> hexadecimal numbers in ---- format:
> The first three numbers are generated from the low, middle, and high parts of 
> a timestamp. The high part also includes the UUID version number.
> The fourth number preserves temporal uniqueness in case the timestamp value 
> loses monotonicity (for example, due to daylight saving time).
> The fifth number is an IEEE 802 node number that provides spatial uniqueness. 
> A random number is substituted if the latter is not available (for example, 
> because the host device has no Ethernet card, or it is unknown how to find 
> the hardware address of an interface on the host operating system). In this 
> case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision 
> should have very low probability.
> See: [RFC 4122: 
> http://www.ietf.org/rfc/rfc4122.txt|http://www.ietf.org/rfc/rfc4122.txt]
> See detailed semantics:
>MySql: 
> [https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid|https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid]
> Any feedback is welcome :).





[jira] [Commented] (FLINK-8058) Queryable state should check types

2018-07-21 Thread Congxian Qiu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551703#comment-16551703
 ] 

Congxian Qiu commented on FLINK-8058:
-

Hi [~kkl0u], is this issue still needed for Queryable State? If so, I'm 
interested in working on it.

Should the state type and the type of the contained values be checked in the 
JobMaster or in the KvStateServerHandler? I would prefer adding the 
StateDescriptor to the JobMaster and checking everything when looking up the 
state location. What is your opinion?
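The kind of check being proposed could look roughly like the following; the method and parameter names are hypothetical, not Flink API:

```java
public class StateTypeCheck {

    // Compare the registered state metadata against what the client requested;
    // both the state kind (ValueState, ListState, ...) and the value type
    // must match before serving the query.
    static void checkCompatible(Class<?> registeredStateType, Class<?> requestedStateType,
                                Class<?> registeredValueType, Class<?> requestedValueType) {
        if (!registeredStateType.equals(requestedStateType)) {
            throw new IllegalArgumentException("State type mismatch: registered "
                    + registeredStateType.getSimpleName() + ", requested "
                    + requestedStateType.getSimpleName());
        }
        if (!registeredValueType.equals(requestedValueType)) {
            throw new IllegalArgumentException("Value type mismatch: registered "
                    + registeredValueType.getSimpleName() + ", requested "
                    + requestedValueType.getSimpleName());
        }
    }

    public static void main(String[] args) {
        // Matching request: passes silently.
        checkCompatible(String.class, String.class, Long.class, Long.class);
        // Mismatched state kind: rejected with a descriptive message.
        try {
            checkCompatible(java.util.List.class, String.class, Long.class, Long.class);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```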

> Queryable state should check types
> --
>
> Key: FLINK-8058
> URL: https://issues.apache.org/jira/browse/FLINK-8058
> Project: Flink
>  Issue Type: Improvement
>  Components: Queryable State
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Priority: Major
>
> The queryable state currently doesn't do any type checks on the client or 
> server and generally relies on serializers to catch errors.
> Neither the type of state is checked (ValueState, ListState etc.) nor the 
> type of contained values.





[GitHub] flink pull request #6380: [FLINK-9614] [table] Improve the error message for...

2018-07-21 Thread zhangminglei
GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/6380

[FLINK-9614] [table] Improve the error message for Compiler#compile

## What is the purpose of the change
Improve the error message for Compiler#compile

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-9614-improve-error

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6380.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6380


commit f4406622c67df9212c545d36b3572e774acf1df7
Author: zhangminglei 
Date:   2018-07-21T08:39:16Z

[FLINK-9614] [table] Improve the error message for Compiler#compile




---
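The intended improvement can be sketched without Janino: wrap the compile step and turn a StackOverflowError into an actionable message. Here compileStep merely stands in for compiler.cook(code), and the message text is illustrative, not the actual patch:

```java
public class CompileErrorMessage {

    // Stand-in for the real compile call; unbounded recursion forces
    // a StackOverflowError just like a deeply nested generated program.
    static void compileStep(int depth) {
        compileStep(depth + 1);
    }

    static void compile() {
        try {
            compileStep(0);
        } catch (StackOverflowError e) {
            // Actionable message instead of "This is a bug."
            throw new RuntimeException(
                    "Table program is too large to compile. "
                    + "Try increasing the JVM thread stack size (e.g. -Xss20m).", e);
        }
    }

    public static void main(String[] args) {
        try {
            compile();
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```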


[jira] [Commented] (FLINK-9614) Improve the error message for Compiler#compile

2018-07-21 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551586#comment-16551586
 ] 

ASF GitHub Bot commented on FLINK-9614:
---

GitHub user zhangminglei opened a pull request:

https://github.com/apache/flink/pull/6380

[FLINK-9614] [table] Improve the error message for Compiler#compile

## What is the purpose of the change
Improve the error message for Compiler#compile

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangminglei/flink flink-9614-improve-error

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6380.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6380


commit f4406622c67df9212c545d36b3572e774acf1df7
Author: zhangminglei 
Date:   2018-07-21T08:39:16Z

[FLINK-9614] [table] Improve the error message for Compiler#compile




> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: zhangminglei
>Assignee: zhangminglei
>Priority: Major
>  Labels: pull-request-available
>
> When the SQL below is too long, for example
> case when  case when .
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> then it causes a {{StackOverflowError}}. The current code is below; the 
> error message should suggest setting -Xss 20m instead of claiming {{This is 
> a bug..}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Updated] (FLINK-9614) Improve the error message for Compiler#compile

2018-07-21 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9614:
--
Labels: pull-request-available  (was: )

> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: zhangminglei
>Assignee: zhangminglei
>Priority: Major
>  Labels: pull-request-available
>
> When the SQL below is too long, for example
> case when  case when .
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> then it causes a {{StackOverflowError}}. The current code is below; the 
> error message should suggest setting -Xss 20m instead of claiming {{This is 
> a bug..}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Commented] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-07-21 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551576#comment-16551576
 ] 

ASF GitHub Bot commented on FLINK-9185:
---

Github user StephenJeson commented on the issue:

https://github.com/apache/flink/pull/5894
  
@tillrohrmann I have finished the code. Could you please review it at your 
convenience?


> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Stephen Jason
>Priority: Minor
>  Labels: pull-request-available
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.
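The hazard and the fix can be sketched with a plain BiFunction; the names are illustrative:

```java
import java.util.function.BiFunction;

public class UnboxingNullCheck {

    // A predicate that can return null, e.g. when no branch produced a verdict.
    static final BiFunction<Integer, Integer, Boolean> approveFun = (a, b) -> null;

    static boolean unsafe(int a, int b) {
        // NullPointerException: the null Boolean is auto-unboxed here.
        return approveFun.apply(a, b);
    }

    static boolean safe(int a, int b) {
        // Explicit null handling: a null verdict is treated as "not approved".
        return Boolean.TRUE.equals(approveFun.apply(a, b));
    }

    public static void main(String[] args) {
        System.out.println(safe(1, 2)); // false
        try {
            unsafe(1, 2);
        } catch (NullPointerException e) {
            System.out.println("unboxing NPE");
        }
    }
}
```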





[GitHub] flink issue #5894: [FLINK-9185] [runtime] Fix potential null dereference in ...

2018-07-21 Thread StephenJeson
Github user StephenJeson commented on the issue:

https://github.com/apache/flink/pull/5894
  
@tillrohrmann I have finished the code. Could you please review it at your 
convenience?


---


[jira] [Commented] (FLINK-7205) Add UUID supported in TableAPI/SQL

2018-07-21 Thread buptljy (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551569#comment-16551569
 ] 

buptljy commented on FLINK-7205:


We can use java.util.UUID (which also follows RFC 4122) to implement this.

 

> Add UUID supported in TableAPI/SQL
> --
>
> Key: FLINK-7205
> URL: https://issues.apache.org/jira/browse/FLINK-7205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>
> UUID() returns a value that conforms to UUID version 1 as described in RFC 
> 4122. The value is a 128-bit number represented as a utf8 string of five 
> hexadecimal numbers in ---- format:
> The first three numbers are generated from the low, middle, and high parts of 
> a timestamp. The high part also includes the UUID version number.
> The fourth number preserves temporal uniqueness in case the timestamp value 
> loses monotonicity (for example, due to daylight saving time).
> The fifth number is an IEEE 802 node number that provides spatial uniqueness. 
> A random number is substituted if the latter is not available (for example, 
> because the host device has no Ethernet card, or it is unknown how to find 
> the hardware address of an interface on the host operating system). In this 
> case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision 
> should have very low probability.
> See: [RFC 4122: 
> http://www.ietf.org/rfc/rfc4122.txt|http://www.ietf.org/rfc/rfc4122.txt]
> See detailed semantics:
>MySql: 
> [https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid|https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid]
> Any feedback is welcome :).





[jira] [Assigned] (FLINK-7205) Add UUID supported in TableAPI/SQL

2018-07-21 Thread buptljy (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

buptljy reassigned FLINK-7205:
--

Assignee: buptljy

> Add UUID supported in TableAPI/SQL
> --
>
> Key: FLINK-7205
> URL: https://issues.apache.org/jira/browse/FLINK-7205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: buptljy
>Priority: Major
>
> UUID() returns a value that conforms to UUID version 1 as described in RFC 
> 4122. The value is a 128-bit number represented as a utf8 string of five 
> hexadecimal numbers in ---- format:
> The first three numbers are generated from the low, middle, and high parts of 
> a timestamp. The high part also includes the UUID version number.
> The fourth number preserves temporal uniqueness in case the timestamp value 
> loses monotonicity (for example, due to daylight saving time).
> The fifth number is an IEEE 802 node number that provides spatial uniqueness. 
> A random number is substituted if the latter is not available (for example, 
> because the host device has no Ethernet card, or it is unknown how to find 
> the hardware address of an interface on the host operating system). In this 
> case, spatial uniqueness cannot be guaranteed. Nevertheless, a collision 
> should have very low probability.
> See: [RFC 4122: 
> http://www.ietf.org/rfc/rfc4122.txt|http://www.ietf.org/rfc/rfc4122.txt]
> See detailed semantics:
>MySql: 
> [https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid|https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_uuid]
> Any feedback is welcome :).


