[jira] [Created] (FLINK-9651) Add a Kafka table source factory with Protobuf format support

2018-06-23 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9651:
---

 Summary: Add a Kafka table source factory with Protobuf format 
support
 Key: FLINK-9651
 URL: https://issues.apache.org/jira/browse/FLINK-9651
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Reporter: mingleizhang
Assignee: mingleizhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9650) Support Protocol Buffers formats

2018-06-23 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9650:

Description: We need to generate a {{TypeInformation}} from a standard 
[Protobuf schema|https://github.com/google/protobuf] (and maybe vice versa). 

> Support Protocol Buffers formats
> 
>
> Key: FLINK-9650
> URL: https://issues.apache.org/jira/browse/FLINK-9650
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> We need to generate a {{TypeInformation}} from a standard [Protobuf 
> schema|https://github.com/google/protobuf] (and maybe vice versa). 
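The descriptor walk this issue asks for could be sketched roughly as below. This is only an illustration, not the proposed Flink API: the class and mapping table are hypothetical, and they mirror how Protobuf's {{FieldDescriptor.JavaType}} values might map onto Flink's basic type information (a real implementation would recurse over {{com.google.protobuf.Descriptors}} instead of strings).

```java
import java.util.Map;

// Illustrative sketch only: maps Protobuf JavaType names onto the Flink
// TypeInformation constants a Protobuf format could plausibly produce.
class ProtoTypeMapping {

    // Scalar JavaType -> Flink Types.* constant name (assumed mapping).
    static final Map<String, String> PROTO_TO_FLINK = Map.of(
            "INT", "Types.INT",
            "LONG", "Types.LONG",
            "FLOAT", "Types.FLOAT",
            "DOUBLE", "Types.DOUBLE",
            "BOOLEAN", "Types.BOOLEAN",
            "STRING", "Types.STRING",
            "BYTE_STRING", "Types.PRIMITIVE_ARRAY(Types.BYTE)",
            "ENUM", "Types.STRING");

    // MESSAGE fields would recurse into a nested Types.ROW(...), so they are
    // handled separately from the scalar table above.
    static String typeInfoFor(String protoJavaType) {
        if ("MESSAGE".equals(protoJavaType)) {
            return "Types.ROW(...)"; // recurse over the nested descriptor
        }
        String flinkType = PROTO_TO_FLINK.get(protoJavaType);
        if (flinkType == null) {
            throw new IllegalArgumentException(
                    "Unsupported Protobuf type: " + protoJavaType);
        }
        return flinkType;
    }
}
```

The "vice versa" direction would invert this table to derive a Protobuf schema from a row type.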





[jira] [Created] (FLINK-9650) Support Protocol Buffers formats

2018-06-23 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9650:
---

 Summary: Support Protocol Buffers formats
 Key: FLINK-9650
 URL: https://issues.apache.org/jira/browse/FLINK-9650
 Project: Flink
  Issue Type: Sub-task
  Components: Streaming Connectors
Reporter: mingleizhang
Assignee: mingleizhang








[jira] [Updated] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-23 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9614:

Affects Version/s: 1.5.0

> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When the SQL below is too long, e.g.
> case when ... case when ...
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> This causes a {{StackOverflowError}}. The current code is below; we could 
> solve this by suggesting {{-Xss20m}}, instead of reporting {{This is a bug.}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}
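The suggested improvement could look like the sketch below (not the actual Flink patch; the class and helper interface are stand-ins): catch {{StackOverflowError}} separately from other throwables, since deeply nested generated code can trigger it, and give the user an actionable {{-Xss}} hint instead of claiming a bug.

```java
// Sketch of the suggested change: distinguish a StackOverflowError from
// genuine compiler bugs when compiling generated table programs.
class CompileErrorHint {

    // Stand-in for Janino's SimpleCompiler used in the snippet above.
    interface Cook { void cook(String code); }

    static void compile(Cook compiler, String code) {
        try {
            compiler.cook(code);
        } catch (StackOverflowError soe) {
            // Deeply nested expressions (e.g. a huge CASE WHEN chain) land here.
            throw new RuntimeException(
                    "Table program is too deeply nested to compile. "
                            + "Try increasing the thread stack size, e.g. -Xss20m.", soe);
        } catch (Throwable t) {
            // Everything else really is unexpected.
            throw new RuntimeException(
                    "Table program cannot be compiled. This is a bug. "
                            + "Please file an issue.", t);
        }
    }
}
```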





[jira] [Updated] (FLINK-9625) Enrich the error information to user

2018-06-20 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9625:

Component/s: Table API & SQL

> Enrich the error information to user 
> -
>
> Key: FLINK-9625
> URL: https://issues.apache.org/jira/browse/FLINK-9625
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When there are too many fields and one of their types does not match, the 
> error below is thrown. We should tell users which field's type does not 
> match, instead of only reporting the mismatched types.
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.TableException: Result 
> field does not match requested type. Requested: String; Actual: Integer
>   at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:953)
>   at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:944)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:944)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.getConversionMapperWithChanges(StreamTableEnvironment.scala:358)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:794)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:734)
>   at 
> org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:414)
>   at 
> org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:357)
>   at router.StreamRouterRunning.main(StreamRouterRunning.java:76)
> {code}
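The enrichment proposed here could be sketched as below. The class and signature are illustrative, not Flink's actual internals: when validating a requested schema against the actual result types, the error names the mismatching field and its position.

```java
import java.util.List;

// Sketch of the proposed error-message enrichment: report WHICH field
// mismatches, not just the requested/actual types.
class SchemaValidator {

    static void validate(List<String> fieldNames,
                         List<String> requestedTypes,
                         List<String> actualTypes) {
        for (int i = 0; i < fieldNames.size(); i++) {
            if (!requestedTypes.get(i).equals(actualTypes.get(i))) {
                // Enriched message: includes the field name and its index.
                throw new IllegalStateException(String.format(
                        "Result field '%s' (index %d) does not match requested type. "
                                + "Requested: %s; Actual: %s",
                        fieldNames.get(i), i,
                        requestedTypes.get(i), actualTypes.get(i)));
            }
        }
    }
}
```

With many fields, "Result field 'age' (index 1) ..." is far easier to act on than the current "Requested: String; Actual: Integer" alone.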





[jira] [Updated] (FLINK-9625) Enrich the error information to user

2018-06-20 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9625:

Description: 
When there are too many fields and one of their types does not match, the 
error below is thrown. We should tell users which field's type does not match, 
instead of only reporting the mismatched types.

{code:java}
Exception in thread "main" org.apache.flink.table.api.TableException: Result 
field does not match requested type. Requested: String; Actual: Integer
at 
org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:953)
at 
org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:944)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:944)
at 
org.apache.flink.table.api.StreamTableEnvironment.getConversionMapperWithChanges(StreamTableEnvironment.scala:358)
at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:794)
at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:734)
at 
org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:414)
at 
org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:357)
at router.StreamRouterRunning.main(StreamRouterRunning.java:76)
{code}


  was:
When there are too many fields and one of their types does not match, the 
error below is thrown. We should tell users which field's type does not match.

{code:java}
Exception in thread "main" org.apache.flink.table.api.TableException: Result 
field does not match requested type. Requested: String; Actual: Integer
at 
org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:953)
at 
org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:944)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:944)
at 
org.apache.flink.table.api.StreamTableEnvironment.getConversionMapperWithChanges(StreamTableEnvironment.scala:358)
at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:794)
at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:734)
at 
org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:414)
at 
org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:357)
at router.StreamRouterRunning.main(StreamRouterRunning.java:76)
{code}



> Enrich the error information to user 
> -
>
> Key: FLINK-9625
> URL: https://issues.apache.org/jira/browse/FLINK-9625
> Project: Flink
>  Issue Type: Improvement
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When there are too many fields and one of their types does not match, the 
> error below is thrown. We should tell users which field's type does not 
> match, instead of only reporting the mismatched types.
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.TableException: Result 
> field does not match requested type. Requested: String; Actual: Integer
>   at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:953)
>   at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:944)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:944)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.getConversionMapperWithChanges(StreamTableEnvironment.scala:358)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:794)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:734)
>   at 
> 

[jira] [Created] (FLINK-9625) Enrich the error information to user

2018-06-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9625:
---

 Summary: Enrich the error information to user 
 Key: FLINK-9625
 URL: https://issues.apache.org/jira/browse/FLINK-9625
 Project: Flink
  Issue Type: Improvement
Reporter: mingleizhang
Assignee: mingleizhang


When there are too many fields and one of their types does not match, the 
error below is thrown. We should tell users which field's type does not match.

{code:java}
Exception in thread "main" org.apache.flink.table.api.TableException: Result 
field does not match requested type. Requested: String; Actual: Integer
at 
org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:953)
at 
org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$1.apply(TableEnvironment.scala:944)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:944)
at 
org.apache.flink.table.api.StreamTableEnvironment.getConversionMapperWithChanges(StreamTableEnvironment.scala:358)
at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:794)
at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:734)
at 
org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:414)
at 
org.apache.flink.table.api.java.StreamTableEnvironment.toRetractStream(StreamTableEnvironment.scala:357)
at router.StreamRouterRunning.main(StreamRouterRunning.java:76)
{code}






[jira] [Issue Comment Deleted] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-20 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9614:

Comment: was deleted

(was: I am closing this since we catch a Throwable; a StackOverflowError is 
acceptable.)

> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When the SQL below is too long, e.g.
> case when ... case when ...
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> This causes a {{StackOverflowError}}. The current code is below; we could 
> solve this by suggesting {{-Xss20m}}, instead of reporting {{This is a bug.}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Updated] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-20 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9614:

Description: 
When the SQL below is too long, e.g.

case when ... case when ...
 when host in 
('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
 then 'condition'

This causes a {{StackOverflowError}}. The current code is below; we could 
solve this by suggesting {{-Xss20m}}, instead of reporting {{This is a bug.}}

{code:java}
trait Compiler[T] {

  @throws(classOf[CompileException])
  def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
require(cl != null, "Classloader must not be null.")
val compiler = new SimpleCompiler()
compiler.setParentClassLoader(cl)
try {
  compiler.cook(code)
} catch {
  case t: Throwable =>
throw new InvalidProgramException("Table program cannot be compiled. " +
  "This is a bug. Please file an issue.", t)
}
compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
  }
}
{code}


  was:
When the SQL below is too long, e.g.

case when ... case when ...
 when host in 
('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
 then 'condition'

This causes a {{StackOverflowError}}. Given the current code below, I would 
suggest prompting users to add {{-Xss20m}}, instead of reporting {{This is a 
bug.}}

{code:java}
trait Compiler[T] {

  @throws(classOf[CompileException])
  def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
require(cl != null, "Classloader must not be null.")
val compiler = new SimpleCompiler()
compiler.setParentClassLoader(cl)
try {
  compiler.cook(code)
} catch {
  case t: Throwable =>
throw new InvalidProgramException("Table program cannot be compiled. " +
  "This is a bug. Please file an issue.", t)
}
compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
  }
}
{code}



> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When the SQL below is too long, e.g.
> case when ... case when ...
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> This causes a {{StackOverflowError}}. The current code is below; we could 
> solve this by suggesting {{-Xss20m}}, instead of reporting {{This is a bug.}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Reopened] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-20 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reopened FLINK-9614:
-

> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When the SQL below is too long, e.g.
> case when ... case when ...
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> This causes a {{StackOverflowError}}. Given the current code below, I would 
> suggest prompting users to add {{-Xss20m}}, instead of reporting {{This is a 
> bug.}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Commented] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-19 Thread mingleizhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517679#comment-16517679
 ] 

mingleizhang commented on FLINK-9614:
-

I am closing this since we catch a Throwable; a StackOverflowError is 
acceptable.

> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When the SQL below is too long, e.g.
> case when ... case when ...
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> This causes a {{StackOverflowError}}. Given the current code below, I would 
> suggest prompting users to add {{-Xss20m}}, instead of reporting {{This is a 
> bug.}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Closed] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-19 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang closed FLINK-9614.
---

> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When the SQL below is too long, e.g.
> case when ... case when ...
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> This causes a {{StackOverflowError}}. Given the current code below, I would 
> suggest prompting users to add {{-Xss20m}}, instead of reporting {{This is a 
> bug.}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Resolved] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-19 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang resolved FLINK-9614.
-
Resolution: Not A Problem

> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When the SQL below is too long, e.g.
> case when ... case when ...
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> This causes a {{StackOverflowError}}. Given the current code below, I would 
> suggest prompting users to add {{-Xss20m}}, instead of reporting {{This is a 
> bug.}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Updated] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-19 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9614:

Description: 
When the SQL below is too long, e.g.

case when ... case when ...
 when host in 
('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
 then 'condition'

This causes a {{StackOverflowError}}. Given the current code below, I would 
suggest prompting users to add {{-Xss20m}}, instead of reporting {{This is a 
bug.}}

{code:java}
trait Compiler[T] {

  @throws(classOf[CompileException])
  def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
require(cl != null, "Classloader must not be null.")
val compiler = new SimpleCompiler()
compiler.setParentClassLoader(cl)
try {
  compiler.cook(code)
} catch {
  case t: Throwable =>
throw new InvalidProgramException("Table program cannot be compiled. " +
  "This is a bug. Please file an issue.", t)
}
compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
  }
}
{code}


> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> When the SQL below is too long, e.g.
> case when ... case when ...
>  when host in 
> ('114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247','114.67.56.94','114.67.56.102','114.67.56.103','114.67.56.106','114.67.56.107','183.60.220.231','183.60.220.232','183.60.219.247')
>  then 'condition'
> This causes a {{StackOverflowError}}. Given the current code below, I would 
> suggest prompting users to add {{-Xss20m}}, instead of reporting {{This is a 
> bug.}}
> {code:java}
> trait Compiler[T] {
>   @throws(classOf[CompileException])
>   def compile(cl: ClassLoader, name: String, code: String): Class[T] = {
> require(cl != null, "Classloader must not be null.")
> val compiler = new SimpleCompiler()
> compiler.setParentClassLoader(cl)
> try {
>   compiler.cook(code)
> } catch {
>   case t: Throwable =>
> throw new InvalidProgramException("Table program cannot be compiled. 
> " +
>   "This is a bug. Please file an issue.", t)
> }
> compiler.getClassLoader.loadClass(name).asInstanceOf[Class[T]]
>   }
> }
> {code}





[jira] [Updated] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-19 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9614:

Component/s: Table API & SQL

> Improve the error message for Compiler#compile
> --
>
> Key: FLINK-9614
> URL: https://issues.apache.org/jira/browse/FLINK-9614
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>






[jira] [Created] (FLINK-9614) Improve the error message for Compiler#compile

2018-06-19 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9614:
---

 Summary: Improve the error message for Compiler#compile
 Key: FLINK-9614
 URL: https://issues.apache.org/jira/browse/FLINK-9614
 Project: Flink
  Issue Type: Improvement
Reporter: mingleizhang
Assignee: mingleizhang








[jira] [Updated] (FLINK-9609) Add bucket ready mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Description: 
Currently, BucketingSink only supports {{notifyCheckpointComplete}}, but users 
may want to do some extra work when a bucket becomes ready. It would be nice to 
support a {{BucketReady}} notification. For example, if one bucket is created 
every 5 minutes, then at the end of those 5 minutes, before the next bucket is 
created, the user may need to react to the previous bucket becoming ready, such 
as by sending the bucket's ready timestamp to a server.

Here, 'bucket ready' means that no part file under the bucket has a 
{{.pending}} or {{.in-progress}} suffix; only then can the bucket be considered 
ready for use.

  was:
Currently, BucketingSink only supports {{notifyCheckpointComplete}}, but users 
may want to do some extra work when a bucket becomes ready. It would be nice to 
support a {{BucketReady}} notification. For example, if one bucket is created 
every 5 minutes, then at the end of those 5 minutes, before the next bucket is 
created, the user may need to react to the previous bucket becoming ready, such 
as by sending the bucket's ready timestamp to a server.

Here, 'bucket ready' means that no part file under the bucket has a 
{{.pending}} or {{.in-progress}} suffix; only then can the bucket be considered 
ready for use.


> Add bucket ready mechanism for BucketingSink when checkpoint complete
> -
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
> may want to do some extra work when a bucket is ready. It would be nice if we 
> could support {{BucketReady}} functionality for users. For example, one bucket 
> is created every 5 minutes, and at the end of those 5 minutes, before creating 
> the next bucket, the user needs to do something once the previous bucket is 
> ready, such as sending the timestamp of the bucket-ready time to a server or 
> doing some other work.
> Here, bucket ready means that none of the part file names under a bucket end 
> with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
> considered ready for use.
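The bucket-ready condition described in the issue can be sketched as a small predicate plus a callback interface. This is a hypothetical sketch, not Flink API: the names {{BucketReadyListener}} and {{isBucketReady}} are illustrative only.

```java
import java.util.Arrays;

// Hypothetical callback the issue proposes; not part of Flink's API.
interface BucketReadyListener {
    void onBucketReady(String bucketPath, long readyTimestampMs);
}

class BucketReadyCheck {
    // A bucket is "ready" when no part file under it still carries the
    // .pending or .in-progress suffix.
    static boolean isBucketReady(String[] partFileNames) {
        return Arrays.stream(partFileNames)
                .noneMatch(n -> n.endsWith(".pending") || n.endsWith(".in-progress"));
    }
}
```

With such a check, the sink could invoke the listener from {{notifyCheckpointComplete}} once the predicate first becomes true for a closed bucket.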



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9609) Add bucket ready mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Description: 
Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
may want to do some extra work when a bucket is ready. It would be nice if we 
could support {{BucketReady}} functionality for users. For example, one bucket 
is created every 5 minutes, and at the end of those 5 minutes, before creating 
the next bucket, the user needs to do something once the previous bucket is 
ready, such as sending the timestamp of the bucket-ready time to a server or 
doing some other work.

Here, bucket ready means that none of the part file names under a bucket end 
with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
considered ready for use.

  was:
Currently, BucketingSink only support {{notifyCheckpointComplete}}. But users 
want to do some extra work when a bucket is ready. It would be nice if we can 
support {{BucketReady}} functionality for user. For example, One bucket is 
created for every 5 minutes, and then at the end of 5 minutes before creating 
the next bucket, the user needs to do something for the previous bucket ready, 
such as sending the timestamp of the bucket ready time to a server or some 
other stuff.

Here, Bucket ready means all the part files name suffix under a bucket status 
neither {{.pending}} nor {{.in-progress}}. Then we can think this bucket is 
ready for user use.


> Add bucket ready mechanism for BucketingSink when checkpoint complete
> -
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
> may want to do some extra work when a bucket is ready. It would be nice if we 
> could support {{BucketReady}} functionality for users. For example, one bucket 
> is created every 5 minutes, and at the end of those 5 minutes, before creating 
> the next bucket, the user needs to do something once the previous bucket is 
> ready, such as sending the timestamp of the bucket-ready time to a server or 
> doing some other work.
> Here, bucket ready means that none of the part file names under a bucket end 
> with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
> considered ready for use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9609) Add bucket ready mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Description: 
Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
may want to do some extra work when a bucket is ready. It would be nice if we 
could support {{BucketReady}} functionality for users. For example, one bucket 
is created every 5 minutes, and at the end of those 5 minutes, before creating 
the next bucket, the user needs to do something once the previous bucket is 
ready, such as sending the timestamp of the bucket-ready time to a server or 
doing some other work.

Here, bucket ready means that none of the part file names under a bucket end 
with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
considered ready for use.

  was:
Currently, BucketingSink only support {{notifyCheckpointComplete}}. But users 
want to do some extra work when a bucket is ready. It would be nice if we can 
support {{BucketReady}} functionality for user. For example, One bucket is 
created for every 5 minutes, and then at the end of 5 minutes before creating 
the next bucket, the user needs to do something for the previous bucket ready, 
such as sending the timestamp of the bucket ready time to a server or some 
other stuff.

Here, Bucket ready means all the part files name suffix under a bucket status 
neither {{.pending}} nor {{.in-progress}}. Then we can think this bucket is 
ready.


> Add bucket ready mechanism for BucketingSink when checkpoint complete
> -
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
> may want to do some extra work when a bucket is ready. It would be nice if we 
> could support {{BucketReady}} functionality for users. For example, one bucket 
> is created every 5 minutes, and at the end of those 5 minutes, before creating 
> the next bucket, the user needs to do something once the previous bucket is 
> ready, such as sending the timestamp of the bucket-ready time to a server or 
> doing some other work.
> Here, bucket ready means that none of the part file names under a bucket end 
> with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
> considered ready for use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9609) Add bucket ready mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Summary: Add bucket ready mechanism for BucketingSink when checkpoint 
complete  (was: Add bucket ready notification mechanism for BucketingSink when 
checkpoint complete)

> Add bucket ready mechanism for BucketingSink when checkpoint complete
> -
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
> may want to do some extra work when a bucket is ready. It would be nice if we 
> could support {{BucketReady}} functionality for users. For example, one bucket 
> is created every 5 minutes, and at the end of those 5 minutes, before creating 
> the next bucket, the user needs to do something once the previous bucket is 
> ready, such as sending the timestamp of the bucket-ready time to a server or 
> doing some other work.
> Here, bucket ready means that none of the part file names under a bucket end 
> with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
> considered ready.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9609) Add bucket ready notification mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Issue Type: New Feature  (was: Improvement)

> Add bucket ready notification mechanism for BucketingSink when checkpoint 
> complete
> --
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
> may want to do some extra work when a bucket is ready. It would be nice if we 
> could support {{BucketReady}} functionality for users. For example, one bucket 
> is created every 5 minutes, and at the end of those 5 minutes, before creating 
> the next bucket, the user needs to do something once the previous bucket is 
> ready, such as sending the timestamp of the bucket-ready time to a server or 
> doing some other work.
> Here, bucket ready means that none of the part file names under a bucket end 
> with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
> considered ready.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9609) Add bucket ready notification mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Description: 
Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
may want to do some extra work when a bucket is ready. It would be nice if we 
could support {{BucketReady}} functionality for users. For example, one bucket 
is created every 5 minutes, and at the end of those 5 minutes, before creating 
the next bucket, the user needs to do something once the previous bucket is 
ready, such as sending the timestamp of the bucket-ready time to a server or 
doing some other work.

Here, bucket ready means that none of the part file names under a bucket end 
with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
considered ready.

  was:
Currently, BucketingSink only support {{notifyCheckpointComplete}}. But users 
want to do some extra work when a bucket is ready. It would be nice if we can 
support {{BucketReady}} functionality for user. For example, One bucket is 
created for every 5 minutes, and then at the end of 5 minutes before creating 
the next bucket, the user needs to do something for the previous bucket ready, 
such as sending the timestamp of the bucket ready time to a server or some 
other stuff. 

Here, Bucket ready means all the part files name under a bucket status neither 
{{.pending}} nor {{.in-progress}}. Then we can think this bucket is ready.


> Add bucket ready notification mechanism for BucketingSink when checkpoint 
> complete
> --
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: Improvement
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
> may want to do some extra work when a bucket is ready. It would be nice if we 
> could support {{BucketReady}} functionality for users. For example, one bucket 
> is created every 5 minutes, and at the end of those 5 minutes, before creating 
> the next bucket, the user needs to do something once the previous bucket is 
> ready, such as sending the timestamp of the bucket-ready time to a server or 
> doing some other work.
> Here, bucket ready means that none of the part file names under a bucket end 
> with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
> considered ready.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9609) Add bucket ready notification mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Description: 
Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
may want to do some extra work when a bucket is ready. It would be nice if we 
could support {{BucketReady}} functionality for users. For example, one bucket 
is created every 5 minutes, and at the end of those 5 minutes, before creating 
the next bucket, the user needs to do something once the previous bucket is 
ready, such as sending the timestamp of the bucket-ready time to a server or 
doing some other work.

Here, bucket ready means that none of the part file names under a bucket end 
with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
considered ready.

  was:Currently, BucketingSink only can {{notifyCheckpointComplete}}


> Add bucket ready notification mechanism for BucketingSink when checkpoint 
> complete
> --
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: Improvement
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, BucketingSink only supports {{notifyCheckpointComplete}}. But users 
> may want to do some extra work when a bucket is ready. It would be nice if we 
> could support {{BucketReady}} functionality for users. For example, one bucket 
> is created every 5 minutes, and at the end of those 5 minutes, before creating 
> the next bucket, the user needs to do something once the previous bucket is 
> ready, such as sending the timestamp of the bucket-ready time to a server or 
> doing some other work.
> Here, bucket ready means that none of the part file names under a bucket end 
> with the {{.pending}} or {{.in-progress}} suffix. Then the bucket can be 
> considered ready.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9609) Add bucket ready notification mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Description: Currently, BucketingSink only can {{notifyCheckpointComplete}} 
 (was: Currently, )

> Add bucket ready notification mechanism for BucketingSink when checkpoint 
> complete
> --
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: Improvement
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, BucketingSink only can {{notifyCheckpointComplete}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9609) Add bucket ready notification mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9609:

Description: Currently, 

> Add bucket ready notification mechanism for BucketingSink when checkpoint 
> complete
> --
>
> Key: FLINK-9609
> URL: https://issues.apache.org/jira/browse/FLINK-9609
> Project: Flink
>  Issue Type: Improvement
>  Components: filesystem-connector, State Backends, Checkpointing
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9609) Add bucket ready notification mechanism for BucketingSink when checkpoint complete

2018-06-18 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9609:
---

 Summary: Add bucket ready notification mechanism for BucketingSink 
when checkpoint complete
 Key: FLINK-9609
 URL: https://issues.apache.org/jira/browse/FLINK-9609
 Project: Flink
  Issue Type: Improvement
  Components: filesystem-connector, State Backends, Checkpointing
Reporter: mingleizhang
Assignee: mingleizhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9604) Support KafkaProtoBufTableSource

2018-06-17 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9604:

Description: 
Protocol buffers are a language-neutral, platform-neutral, extensible mechanism 
for serializing structured data. In production applications, Protocol Buffers 
is often used for serialization and deserialization, so I would suggest adding 
this commonly used functionality.


> Support KafkaProtoBufTableSource
> 
>
> Key: FLINK-9604
> URL: https://issues.apache.org/jira/browse/FLINK-9604
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Protocol buffers are a language-neutral, platform-neutral, extensible 
> mechanism for serializing structured data. In production applications, 
> Protocol Buffers is often used for serialization and deserialization, so I 
> would suggest adding this commonly used functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9607) Support ParquetTableSink

2018-06-17 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9607:
---

Assignee: (was: mingleizhang)

> Support ParquetTableSink
> 
>
> Key: FLINK-9607
> URL: https://issues.apache.org/jira/browse/FLINK-9607
> Project: Flink
>  Issue Type: New Feature
>Reporter: mingleizhang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9606) Support ParquetTableSource

2018-06-17 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9606:

Component/s: Table API & SQL

> Support ParquetTableSource
> --
>
> Key: FLINK-9606
> URL: https://issues.apache.org/jira/browse/FLINK-9606
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: mingleizhang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9606) Support ParquetTableSource

2018-06-17 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9606:
---

Assignee: (was: mingleizhang)

> Support ParquetTableSource
> --
>
> Key: FLINK-9606
> URL: https://issues.apache.org/jira/browse/FLINK-9606
> Project: Flink
>  Issue Type: New Feature
>Reporter: mingleizhang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9607) Support ParquetTableSink

2018-06-17 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9607:
---

 Summary: Support ParquetTableSink
 Key: FLINK-9607
 URL: https://issues.apache.org/jira/browse/FLINK-9607
 Project: Flink
  Issue Type: New Feature
Reporter: mingleizhang
Assignee: mingleizhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9606) Support ParquetTableSource

2018-06-17 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9606:
---

 Summary: Support ParquetTableSource
 Key: FLINK-9606
 URL: https://issues.apache.org/jira/browse/FLINK-9606
 Project: Flink
  Issue Type: New Feature
Reporter: mingleizhang
Assignee: mingleizhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9605) Support KafkaProtoBufTableSink

2018-06-17 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9605:
---

 Summary: Support KafkaProtoBufTableSink
 Key: FLINK-9605
 URL: https://issues.apache.org/jira/browse/FLINK-9605
 Project: Flink
  Issue Type: New Feature
Reporter: mingleizhang
Assignee: mingleizhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9604) Support KafkaProtoBufTableSource

2018-06-17 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9604:
---

 Summary: Support KafkaProtoBufTableSource
 Key: FLINK-9604
 URL: https://issues.apache.org/jira/browse/FLINK-9604
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: mingleizhang
Assignee: mingleizhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9236) Use Apache Parent POM 19

2018-06-11 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9236:
---

Assignee: Stephen Jason  (was: mingleizhang)

> Use Apache Parent POM 19
> 
>
> Key: FLINK-9236
> URL: https://issues.apache.org/jira/browse/FLINK-9236
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: Stephen Jason
>Priority: Major
>
> Flink is still using Apache Parent POM 18. Apache Parent POM 19 is out.
> This will also fix Javadoc generation with JDK 10+



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9537) JobManager isolation in session mode

2018-06-09 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9537:
---

Assignee: (was: mingleizhang)

> JobManager isolation in session mode
> 
>
> Key: FLINK-9537
> URL: https://issues.apache.org/jira/browse/FLINK-9537
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Priority: Major
>
> Currently, all {{JobManagers}} are executed in the same process which also 
> runs the {{Dispatcher}} component when using the session mode. This is 
> problematic, since the {{JobManager}} also executes user code. Consequently, 
> a bug in a single Flink job can cause the failure of the other 
> {{JobManagers}} running in the same process. In order to avoid this we should 
> add the functionality to run each {{JobManager}} in its own process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9537) JobManager isolation in session mode

2018-06-08 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9537:
---

Assignee: mingleizhang

> JobManager isolation in session mode
> 
>
> Key: FLINK-9537
> URL: https://issues.apache.org/jira/browse/FLINK-9537
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Major
>
> Currently, all {{JobManagers}} are executed in the same process which also 
> runs the {{Dispatcher}} component when using the session mode. This is 
> problematic, since the {{JobManager}} also executes user code. Consequently, 
> a bug in a single Flink job can cause the failure of the other 
> {{JobManagers}} running in the same process. In order to avoid this we should 
> add the functionality to run each {{JobManager}} in its own process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8886) Job isolation via scheduling in shared cluster

2018-06-06 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-8886:

Description: 
Flink's TaskManager executes tasks from different jobs within the same JVM as 
threads.  We prefer to isolate different jobs on their own JVM.  Thus, we must 
use different TMs for different jobs.  As currently the scheduler will allocate 
task slots within a TM to tasks from different jobs, that means we must stand 
up one cluster per job.  This is wasteful, as it requires at least two 
JobManagers per cluster for high-availability, and the JMs have low utilization.

Additionally, different jobs may require different resources.  Some jobs are 
compute heavy.  Some are IO heavy (lots of state in RocksDB).  At the moment 
the scheduler treats all TMs as equivalent, except possibly in their number 
of available task slots.  Thus, one is required to stand up multiple clusters 
if there is a need for different types of TMs.

It would be useful if one could specify requirements on jobs, such that they are 
only scheduled on a subset of TMs.  Properly configured, that would permit 
isolation of jobs in a shared cluster and scheduling of jobs with specific 
resource needs.

One possible implementation is to specify a set of tags in the TM config file, 
which the TMs use when registering with the JM, and another set of tags 
configured within the job or supplied when submitting the job.  The scheduler 
could then match the tags in the job with the tags in the TMs.  In a 
restrictive mode the scheduler would assign a job task to a TM only if all tags 
match.  In a relaxed mode the scheduler could assign a job task to a TM if 
there is a partial match, while giving preference to a more accurate match.

  was:
Flink's TaskManager executes tasks from different jobs within the same JMV as 
threads.  We prefer to isolate different jobs on their on JVM.  Thus, we must 
use different TMs for different jobs.  As currently the scheduler will allocate 
task slots within a TM to tasks from different jobs, that means we must stand 
up one cluster per job.  This is wasteful, as it requires at least two 
JobManagers per cluster for high-availability, and the JMs have low utilization.

Additionally, different jobs may require different resources.  Some jobs are 
compute heavy.  Some are IO heavy (lots of state in RocksDB).  At the moment 
the scheduler threats all TMs are equivalent, except possibly in their number 
of available task slots.  Thus, one is required to stand up multiple cluster if 
there is a need for different types of TMs.

 

It would be useful if one could specify requirements on job, such that they are 
only scheduled on a subset of TMs.  Properly configured, that would permit 
isolation of jobs in a shared cluster and scheduling of jobs with specific 
resource needs.

 

One possible implementation is to specify a set of tags on the TM config file 
which the TMs used when registering with the JM, and another set of tags 
configured within the job or supplied when submitting the job.  The scheduler 
could then match the tags in the job with the tags in the TMs.  In a 
restrictive mode the scheduler would assign a job task to a TM only if all tags 
match.  In a relaxed mode the scheduler could assign a job task to a TM if 
there is a partial match, while giving preference to a more accurate match.

 

 

 

 


> Job isolation via scheduling in shared cluster
> --
>
> Key: FLINK-8886
> URL: https://issues.apache.org/jira/browse/FLINK-8886
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination, Local Runtime, Scheduler
>Affects Versions: 1.5.0
>Reporter: Elias Levy
>Priority: Major
>
> Flink's TaskManager executes tasks from different jobs within the same JVM as 
> threads.  We prefer to isolate different jobs on their own JVM.  Thus, we 
> must use different TMs for different jobs.  As currently the scheduler will 
> allocate task slots within a TM to tasks from different jobs, that means we 
> must stand up one cluster per job.  This is wasteful, as it requires at least 
> two JobManagers per cluster for high-availability, and the JMs have low 
> utilization.
> Additionally, different jobs may require different resources.  Some jobs are 
> compute heavy.  Some are IO heavy (lots of state in RocksDB).  At the moment 
> the scheduler treats all TMs as equivalent, except possibly in their number 
> of available task slots.  Thus, one is required to stand up multiple clusters 
> if there is a need for different types of TMs.
> It would be useful if one could specify requirements on jobs, such that they 
> are only scheduled on a subset of TMs.  Properly configured, that would 
> permit isolation of jobs in a shared cluster and scheduling of jobs 
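The restrictive and relaxed matching modes proposed in the issue can be sketched roughly as follows. This is an illustrative sketch only: the class {{TagMatcher}} and its methods are assumed names, not Flink code.

```java
import java.util.Set;

class TagMatcher {
    // Restrictive mode: assign a job task to a TM only if every job tag
    // is present among the TM's registered tags.
    static boolean restrictiveMatch(Set<String> jobTags, Set<String> tmTags) {
        return tmTags.containsAll(jobTags);
    }

    // Relaxed mode: score a TM by tag overlap; the scheduler would prefer
    // the TM with the highest score among partial matches.
    static long relaxedScore(Set<String> jobTags, Set<String> tmTags) {
        return jobTags.stream().filter(tmTags::contains).count();
    }
}
```

Under this sketch, a scheduler in relaxed mode would rank candidate TMs by {{relaxedScore}} and fall back to any positive overlap when no TM matches all tags.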

[jira] [Assigned] (FLINK-9413) Tasks can fail with PartitionNotFoundException if consumer deployment takes too long

2018-05-30 Thread mingleizhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9413:
---

Assignee: mingleizhang

> Tasks can fail with PartitionNotFoundException if consumer deployment takes 
> too long
> 
>
> Key: FLINK-9413
> URL: https://issues.apache.org/jira/browse/FLINK-9413
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.4.0, 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
>
> {{Tasks}} can fail with a {{PartitionNotFoundException}} if the deployment of 
> the producer takes too long. More specifically, if it takes longer than the 
> {{taskmanager.network.request-backoff.max}}, then the {{Task}} will give up 
> and fail.
> The problem is that we calculate the {{InputGateDeploymentDescriptor}} for a 
> consuming task once the producer has been assigned a slot but we do not wait 
> until it is actually running. The problem should be fixed if we wait until 
> the task is in state {{RUNNING}} before assigning the result partition to the 
> consumer.
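The give-up behaviour driven by {{taskmanager.network.request-backoff.max}} can be sketched as an exponential backoff that eventually signals failure. Illustrative names only; this is not Flink's actual implementation.

```java
class PartitionRequestBackoff {
    private final long maxBackoffMs;
    private long currentMs;

    PartitionRequestBackoff(long initialBackoffMs, long maxBackoffMs) {
        this.currentMs = initialBackoffMs;
        this.maxBackoffMs = maxBackoffMs;
    }

    // Returns the next delay in ms, or -1 once the cap is exceeded and the
    // task should give up, i.e. fail with PartitionNotFoundException.
    long nextDelayMs() {
        if (currentMs > maxBackoffMs) {
            return -1;
        }
        long delay = currentMs;
        currentMs *= 2;
        return delay;
    }
}
```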



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9410) Replace NMClient with NMClientAsync in YarnResourceManager

2018-05-28 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492418#comment-16492418
 ] 

mingleizhang commented on FLINK-9410:
-

Yes. [~sihuazhou] I would like to take over this. And thank you for your GREAT 
generosity! Thank you very much!

> Replace NMClient with NMClientAsync in YarnResourceManager
> --
>
> Key: FLINK-9410
> URL: https://issues.apache.org/jira/browse/FLINK-9410
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.6.0
>
>
> Currently, the {{YarnResourceManager}} uses the synchronous {{NMClient}} 
> which is called from within the main thread of the {{ResourceManager}}. Since 
> these operations are blocking, we should replace the client with the 
> {{NMClientAsync}} and make the calls non blocking.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-05-27 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491990#comment-16491990
 ] 

mingleizhang commented on FLINK-9185:
-

It seems we cannot find Stephen Jason/Jeson in the naming list.

> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.
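The unboxing hazard can be reproduced and guarded in isolation. This is a minimal sketch using {{BiFunction}} rather than the actual Flink types; the helper name is hypothetical.

```java
import java.util.function.BiFunction;

class UnboxingGuard {
    // Guarded version of the condition: treats a null Boolean result as
    // "not approved" instead of letting auto-unboxing in the && chain
    // throw a NullPointerException.
    static <A, B> boolean approved(BiFunction<A, B, Boolean> approveFun, A ref, B alt) {
        return Boolean.TRUE.equals(approveFun.apply(ref, alt));
    }
}
```

Writing the condition as `Boolean.TRUE.equals(approveFun.apply(...))` keeps the original semantics for non-null results while making a null return simply evaluate to false.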



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9429) Quickstart E2E not working locally

2018-05-24 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490142#comment-16490142
 ] 

mingleizhang commented on FLINK-9429:
-

I will open a PR to fix this soon.

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
>  Labels: test-stability
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9429) Quickstart E2E not working locally

2018-05-23 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488314#comment-16488314
 ] 

mingleizhang commented on FLINK-9429:
-

Hi, [~till.rohrmann]. Have you built Flink again? I tested it on my local 
machine and it looks good. Below is the result.

{code:java}
Success: There are no flink core classes are contained in the jar.
Success: Elasticsearch5SinkExample.class and other user classes are included in 
the jar.
Downloading Elasticsearch from 
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.2.tar.gz 
...
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100 31.7M  100 31.7M0 0   234k  0  0:02:18  0:02:18 --:--:--  280k
Elasticsearch node is running.
Starting cluster.
Starting standalonesession daemon on host zhangmingleideMacBook-Pro.local.
[2018-05-24T10:10:42,017][INFO ][o.e.n.Node   ] [] initializing ...
[2018-05-24T10:10:42,166][INFO ][o.e.e.NodeEnvironment] [tmBMe7q] using [1] 
data paths, mounts [[/ (/dev/disk1)]], net usable_space [286.6gb], net 
total_space [464.6gb], spins? [unknown], types [hfs]
[2018-05-24T10:10:42,167][INFO ][o.e.e.NodeEnvironment] [tmBMe7q] heap size 
[1.9gb], compressed ordinary object pointers [true]
[2018-05-24T10:10:42,171][INFO ][o.e.n.Node   ] node name [tmBMe7q] 
derived from node ID [tmBMe7q8QHWuji0qmmKnAw]; set [node.name] to override
[2018-05-24T10:10:42,174][INFO ][o.e.n.Node   ] version[5.1.2], 
pid[8378], build[c8c4c16/2017-01-11T20:18:39.146Z], OS[Mac OS 
X/10.12.3/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server 
VM/1.8.0_161/25.161-b12]
Starting taskexecutor daemon on host zhangmingleideMacBook-Pro.local.
Waiting for dispatcher REST endpoint to come up...
Waiting for dispatcher REST endpoint to come up...
[2018-05-24T10:10:43,915][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [aggs-matrix-stats]
[2018-05-24T10:10:43,915][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [ingest-common]
[2018-05-24T10:10:43,915][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [lang-expression]
[2018-05-24T10:10:43,915][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [lang-groovy]
[2018-05-24T10:10:43,915][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [lang-mustache]
[2018-05-24T10:10:43,915][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [lang-painless]
[2018-05-24T10:10:43,916][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [percolator]
[2018-05-24T10:10:43,916][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [reindex]
[2018-05-24T10:10:43,916][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [transport-netty3]
[2018-05-24T10:10:43,916][INFO ][o.e.p.PluginsService ] [tmBMe7q] loaded 
module [transport-netty4]
[2018-05-24T10:10:43,917][INFO ][o.e.p.PluginsService ] [tmBMe7q] no 
plugins loaded
Waiting for dispatcher REST endpoint to come up...
Waiting for dispatcher REST endpoint to come up...
Waiting for dispatcher REST endpoint to come up...
Waiting for dispatcher REST endpoint to come up...
[2018-05-24T10:10:49,112][INFO ][o.e.n.Node   ] initialized
[2018-05-24T10:10:49,113][INFO ][o.e.n.Node   ] [tmBMe7q] starting 
...
Dispatcher REST endpoint is up.
[2018-05-24T10:10:49,716][INFO ][o.e.t.TransportService   ] [tmBMe7q] 
publish_address {127.0.0.1:9300}, bound_addresses {[fe80::1]:9300}, 
{[::1]:9300}, {127.0.0.1:9300}
Starting execution of program
[2018-05-24T10:10:52,873][INFO ][o.e.c.s.ClusterService   ] [tmBMe7q] 
new_master 
{tmBMe7q}{tmBMe7q8QHWuji0qmmKnAw}{bo3RSiiLQuGOwWNXFx6QMA}{127.0.0.1}{127.0.0.1:9300},
 reason: zen-disco-elected-as-master ([0] nodes joined)
[2018-05-24T10:10:52,899][INFO ][o.e.h.HttpServer ] [tmBMe7q] 
publish_address {127.0.0.1:9200}, bound_addresses {[fe80::1]:9200}, 
{[::1]:9200}, {127.0.0.1:9200}
[2018-05-24T10:10:52,899][INFO ][o.e.n.Node   ] [tmBMe7q] started
[2018-05-24T10:10:52,904][INFO ][o.e.g.GatewayService ] [tmBMe7q] recovered 
[0] indices into cluster_state
[2018-05-24T10:10:53,616][INFO ][o.e.c.m.MetaDataCreateIndexService] [tmBMe7q] 
[index] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], 
mappings []
[2018-05-24T10:10:53,917][INFO ][o.e.c.m.MetaDataMappingService] [tmBMe7q] 
[index/84rcQ1-OQ6-pUE6djxuxTw] create_mapping [type]
Program execution finished
Job with JobID 574e9c30fd1d86ce6848cd72c103cb55 has finished.
Job Runtime: 2484 ms
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100   194  100   1940 0958  0 --:--:-- --:--:-- --:--:--   960
Waiting for Elasticsearch records ...
  % Total% Received % Xferd  Average Speed   

[jira] [Assigned] (FLINK-9429) Quickstart E2E not working locally

2018-05-23 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9429:
---

Assignee: mingleizhang

> Quickstart E2E not working locally
> --
>
> Key: FLINK-9429
> URL: https://issues.apache.org/jira/browse/FLINK-9429
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Critical
>  Labels: test-stability
>
> The quickstart e2e test is not working locally. It seems as if the job does 
> not produce anything into Elasticsearch. Furthermore, the test does not 
> terminate with control-C.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9411) Support parquet rolling sink writer

2018-05-23 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487436#comment-16487436
 ] 

mingleizhang commented on FLINK-9411:
-

Thanks for your feedback, [~StephanEwen]. Do you mean that every time the 
bucketing sink performs a checkpoint/flush(), it requires writing very large 
batches, and that would cause a performance issue? If so, we can block this 
issue until the bucketing sink shortcomings are fixed.

> Support parquet rolling sink writer
> ---
>
> Key: FLINK-9411
> URL: https://issues.apache.org/jira/browse/FLINK-9411
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector
>Reporter: mingleizhang
>Assignee: Triones Deng
>Priority: Major
>
> Like the ORC rolling sink writer support in FLINK-9407, we should also 
> support a Parquet rolling sink writer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9407) Support orc rolling sink writer

2018-05-23 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9407:

Description: Currently, we only support {{StringWriter}}, 
{{SequenceFileWriter}} and {{AvroKeyValueSinkWriter}}. I would suggest add an 
orc writer for rolling sink.  (was: Currently, we only support 
{{StringWriter}}, {{SequenceFileWriter}} and {{AvroKeyValueSinkWriter}}. I 
would suggest add orc writer for rolling sink.)

> Support orc rolling sink writer
> ---
>
> Key: FLINK-9407
> URL: https://issues.apache.org/jira/browse/FLINK-9407
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> Currently, we only support {{StringWriter}}, {{SequenceFileWriter}} and 
> {{AvroKeyValueSinkWriter}}. I would suggest adding an ORC writer for the 
> rolling sink.
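As a rough illustration of the writer contract such a change would target, the sketch below reduces the rolling/bucketing sink's writer to its essence. All names here are stand-ins: a real ORC or Parquet writer would implement Flink's own {{Writer}} interface from the filesystem connector, not this one.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class RollingWriterSketch {
    // Simplified stand-in for the rolling sink's writer contract:
    // open a stream, append elements, close when the bucket rolls.
    interface Writer<T> {
        void open(OutputStream out) throws IOException;
        void write(T element) throws IOException;
        void close() throws IOException;
    }

    // The existing StringWriter behaviour, reduced to its essence:
    // one newline-terminated record per element.
    static class LineWriter implements Writer<String> {
        private OutputStream out;
        public void open(OutputStream out) { this.out = out; }
        public void write(String element) throws IOException {
            out.write((element + "\n").getBytes(StandardCharsets.UTF_8));
        }
        public void close() throws IOException { out.close(); }
    }
}
```

An ORC or Parquet implementation would follow the same open/write/close lifecycle but delegate to the respective columnar file writer instead of writing raw lines.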



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9411) Support parquet rolling sink writer

2018-05-22 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9411:

Component/s: filesystem-connector

> Support parquet rolling sink writer
> ---
>
> Key: FLINK-9411
> URL: https://issues.apache.org/jira/browse/FLINK-9411
> Project: Flink
>  Issue Type: New Feature
>  Components: filesystem-connector
>Reporter: mingleizhang
>Assignee: Triones Deng
>Priority: Major
>
> Like the ORC rolling sink writer support in FLINK-9407, we should also 
> support a Parquet rolling sink writer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9411) Support parquet rolling sink writer

2018-05-22 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9411:
---

 Summary: Support parquet rolling sink writer
 Key: FLINK-9411
 URL: https://issues.apache.org/jira/browse/FLINK-9411
 Project: Flink
  Issue Type: New Feature
Reporter: mingleizhang
Assignee: Triones Deng


Like the ORC rolling sink writer support in FLINK-9407, we should also support 
a Parquet rolling sink writer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9407) Support orc rolling sink writer

2018-05-22 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9407:
---

 Summary: Support orc rolling sink writer
 Key: FLINK-9407
 URL: https://issues.apache.org/jira/browse/FLINK-9407
 Project: Flink
  Issue Type: New Feature
  Components: filesystem-connector
Reporter: mingleizhang
Assignee: mingleizhang


Currently, we only support {{StringWriter}}, {{SequenceFileWriter}} and 
{{AvroKeyValueSinkWriter}}. I would suggest adding an ORC writer for the rolling sink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (FLINK-9008) End-to-end test: Quickstarts

2018-05-14 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9008:

Comment: was deleted

(was: Hi, [~till.rohrmann] I would like to confirm one stuff with you here for 
the example of a flink application program that will package into a jar file. I 
know there is already an example of {{PopularPlacesToES}}, should I package 
this into a jar or instead I can write a simple job which  support only one 
operator like {{filter}} into that jar. Any suggestions ? Thank you very much.)

> End-to-end test: Quickstarts
> 
>
> Key: FLINK-9008
> URL: https://issues.apache.org/jira/browse/FLINK-9008
> Project: Flink
>  Issue Type: Sub-task
>  Components: Quickstarts, Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Blocker
> Fix For: 1.6.0
>
>
> We could add an end-to-end test which verifies Flink's quickstarts. It should 
> do the following:
> # create a new Flink project using the quickstarts archetype 
> # add a new Flink dependency to the {{pom.xml}} (e.g. Flink connector or 
> library) 
> # run {{mvn clean package -Pbuild-jar}}
> # verify that no core dependencies are contained in the jar file
> # Run the program



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-8993) Add a test operator with keyed state that uses Kryo serializer (registered/unregistered/custom)

2018-05-09 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-8993:
---

Assignee: (was: mingleizhang)

> Add a test operator with keyed state that uses Kryo serializer 
> (registered/unregistered/custom)
> ---
>
> Key: FLINK-8993
> URL: https://issues.apache.org/jira/browse/FLINK-8993
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Priority: Major
>
> Add an operator with keyed state that uses Kryo serializer 
> (registered/unregistered/custom).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-8999) Ensure the job has an operator with operator state.

2018-05-07 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-8999:
---

Assignee: (was: mingleizhang)

> Ensure the job has an operator with operator state.
> ---
>
> Key: FLINK-8999
> URL: https://issues.apache.org/jira/browse/FLINK-8999
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9284) Update CLI page

2018-05-03 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9284:
---

Assignee: Triones Deng

> Update CLI page
> ---
>
> Key: FLINK-9284
> URL: https://issues.apache.org/jira/browse/FLINK-9284
> Project: Flink
>  Issue Type: Improvement
>  Components: Client, Documentation
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Triones Deng
>Priority: Critical
> Fix For: 1.5.0
>
>
> The [CLI|https://ci.apache.org/projects/flink/flink-docs-master/ops/cli.html] 
> page must be updated for 1.5.
> The 
> [examples|https://ci.apache.org/projects/flink/flink-docs-master/ops/cli.html#examples]
>  using the {{-m}} option must be updated to use {{8081}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9259) The implementation of the SourceFunction is not serializable.

2018-04-25 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16453379#comment-16453379
 ] 

mingleizhang commented on FLINK-9259:
-

I would not consider this a bug, since {{SourceFunction}} implements the 
{{Serializable}} interface. From your error, I think your own 
{{SourceFunction}} implementation contains fields that are not serializable.

> The implementation of the SourceFunction is not serializable. 
> --
>
> Key: FLINK-9259
> URL: https://issues.apache.org/jira/browse/FLINK-9259
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Bob Lau
>Priority: Major
>
> The exception stack is as follows:
> {code:java}
> //Code placeholder
> org.apache.flink.api.common.InvalidProgramException: The implementation of 
> the SourceFunction is not serializable. The object probably contains or 
> references non serializable fields.
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:99)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1560)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1472)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1416)
> at 
> com..tysc.job.source.RowDataStreamSpecifyTableSource.getDataStream(RowDataStreamSpecifyTableSource.java:40)
> at 
> org.apache.flink.table.plan.nodes.datastream.StreamTableSourceScan.translateToPlan(StreamTableSourceScan.scala:95)
> at 
> org.apache.flink.table.plan.nodes.datastream.DataStreamJoin.translateToPlan(DataStreamJoin.scala:135)
> at 
> org.apache.flink.table.plan.nodes.datastream.DataStreamCalc.translateToPlan(DataStreamCalc.scala:97)
> at 
> org.apache.flink.table.plan.nodes.datastream.DataStreamJoin.translateToPlan(DataStreamJoin.scala:135)
> at 
> org.apache.flink.table.plan.nodes.datastream.DataStreamCalc.translateToPlan(DataStreamCalc.scala:97)
> at 
> org.apache.flink.table.api.StreamTableEnvironment.translateToCRow(StreamTableEnvironment.scala:885)
> at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:812)
> at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:782)
> at 
> org.apache.flink.table.api.java.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:308)
> at 
> org.apache.flink.table.api.java.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:262)
> at 
> com..tysc.job.service.SubmitJobService.submitJobToLocal(SubmitJobService.java:338)
> at com..tysc.rest.JobSubmitController$3.run(JobSubmitController.java:114)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.NotSerializableException: 
> org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
> at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
> at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
> at 
> java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
> at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
> at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
> at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:447)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:81)
> ... 21 more
> {code}
> I've implement the serializable interface in the implementation of the 
> SourceFunction.
> The code is as follows:
>  
> {code:java}
> //Code placeholder
> @Override
> public void run(SourceContext ctx)
> throws Exception {
> stream.map(new MapFunction(){
> private static final long serialVersionUID = -1723722950731109198L;
> @Override
> public Row map(Row input) throws Exception {
> ctx.collect(input);
> return null;
> }
> });
> }
> {code}
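The root cause suggested above can be reproduced without Flink at all: implementing {{Serializable}} is not enough if a field drags in a non-serializable object, such as the captured {{stream}}. In this sketch, {{NonSerializableStream}} is a hypothetical stand-in for {{SingleOutputStreamOperator}}.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ClosureCapture {
    // Stand-in for SingleOutputStreamOperator: not Serializable.
    static class NonSerializableStream { }

    // Mimics the failing pattern: the class is Serializable, but one of its
    // fields is not, so serialization fails with NotSerializableException.
    static class BadSource implements Serializable {
        NonSerializableStream stream = new NonSerializableStream();
    }

    // One fix: mark the offending field transient. Better still, do not
    // reference a DataStream from inside a SourceFunction at all.
    static class GoodSource implements Serializable {
        transient NonSerializableStream stream = new NonSerializableStream();
    }

    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false; // NotSerializableException is an IOException
        }
    }
}
```

In the reported code, the anonymous {{MapFunction}} captures the enclosing object, which is why the non-serializable {{SingleOutputStreamOperator}} ends up on the serialization path.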



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9259) The implementation of the SourceFunction is not serializable.

2018-04-25 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9259:

Description: 
The exception stack is as follows:
{code:java}
//Code placeholder
org.apache.flink.api.common.InvalidProgramException: The implementation of the 
SourceFunction is not serializable. The object probably contains or references 
non serializable fields.

at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:99)

at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1560)

at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1472)

at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1416)

at 
com..tysc.job.source.RowDataStreamSpecifyTableSource.getDataStream(RowDataStreamSpecifyTableSource.java:40)

at 
org.apache.flink.table.plan.nodes.datastream.StreamTableSourceScan.translateToPlan(StreamTableSourceScan.scala:95)

at 
org.apache.flink.table.plan.nodes.datastream.DataStreamJoin.translateToPlan(DataStreamJoin.scala:135)

at 
org.apache.flink.table.plan.nodes.datastream.DataStreamCalc.translateToPlan(DataStreamCalc.scala:97)

at 
org.apache.flink.table.plan.nodes.datastream.DataStreamJoin.translateToPlan(DataStreamJoin.scala:135)

at 
org.apache.flink.table.plan.nodes.datastream.DataStreamCalc.translateToPlan(DataStreamCalc.scala:97)

at 
org.apache.flink.table.api.StreamTableEnvironment.translateToCRow(StreamTableEnvironment.scala:885)

at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:812)

at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:782)

at 
org.apache.flink.table.api.java.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:308)

at 
org.apache.flink.table.api.java.StreamTableEnvironment.toAppendStream(StreamTableEnvironment.scala:262)

at 
com..tysc.job.service.SubmitJobService.submitJobToLocal(SubmitJobService.java:338)

at com..tysc.rest.JobSubmitController$3.run(JobSubmitController.java:114)

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.io.NotSerializableException: 
org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator

at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)

at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)

at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)

at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)

at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)

at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)

at 
org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:447)

at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:81)

... 21 more
{code}
I've implement the serializable interface in the implementation of the 
SourceFunction.

The code is as follows:

 
{code:java}
//Code placeholder
@Override

public void run(SourceContext ctx)

throws Exception {

stream.map(new MapFunction(){

private static final long serialVersionUID = -1723722950731109198L;



@Override

public Row map(Row input) throws Exception {

ctx.collect(input);

return null;

}

});

}
{code}

  was:
The exception stack is as follows:
{code:java}
//Code placeholder
org.apache.flink.api.common.InvalidProgramException: The implementation of the 
SourceFunction is not serializable. The object probably contains or references 
non serializable fields.

at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:99)

at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1560)

at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1472)

at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:1416)

at 
com..tysc.job.source.RowDataStreamSpecifyTableSource.getDataStream(RowDataStreamSpecifyTableSource.java:40)

at 
org.apache.flink.table.plan.nodes.datastream.StreamTableSourceScan.translateToPlan(StreamTableSourceScan.scala:95)

at 
org.apache.flink.table.plan.nodes.datastream.DataStreamJoin.translateToPlan(DataStreamJoin.scala:135)

at 
org.apache.flink.table.plan.nodes.datastream.DataStreamCalc.translateToPlan(DataStreamCalc.scala:97)

at 

[jira] [Updated] (FLINK-9231) Enable SO_REUSEADDR on listen sockets

2018-04-24 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9231:

Description: This allows sockets to be bound even if there are sockets from 
a previous application that are still pending closure.  (was: This allows 
sockets to be bound even if there are sockets
from a previous application that are still pending closure.)

> Enable SO_REUSEADDR on listen sockets
> -
>
> Key: FLINK-9231
> URL: https://issues.apache.org/jira/browse/FLINK-9231
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Triones Deng
>Priority: Major
>
> This allows sockets to be bound even if there are sockets from a previous 
> application that are still pending closure.
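A minimal {{java.net}} sketch of the option itself follows; where exactly Flink would set it (e.g. on which of its server bootstraps) is left to the implementation, so this only illustrates the mechanics.

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReuseAddrListen {
    // Enable SO_REUSEADDR on a listen socket. The option must be set before
    // bind(), which is why the socket is created unbound first; this lets the
    // address be re-bound while old connections linger in TIME_WAIT.
    static ServerSocket bind(int port) throws Exception {
        ServerSocket server = new ServerSocket();
        server.setReuseAddress(true);
        server.bind(new InetSocketAddress(port)); // port 0 picks a free port
        return server;
    }
}
```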



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-9231) Enable SO_REUSEADDR on listen sockets

2018-04-24 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450111#comment-16450111
 ] 

mingleizhang edited comment on FLINK-9231 at 4/24/18 3:51 PM:
--

Hi, [~triones], I think you can open a PR first; I would not suggest posting a 
bunch of code here. A PR shows your idea in an easier way: if it does not suit 
the issue, people will give you suggestions, and after a few iterations the 
patch will be ready to merge. Good luck!


was (Author: mingleizhang):
Hi, [~triones] I think you can push a PR first. Would not suggest to push a 
bunch of code here. 

> Enable SO_REUSEADDR on listen sockets
> -
>
> Key: FLINK-9231
> URL: https://issues.apache.org/jira/browse/FLINK-9231
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Triones Deng
>Priority: Major
>
> This allows sockets to be bound even if there are sockets
> from a previous application that are still pending closure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9231) Enable SO_REUSEADDR on listen sockets

2018-04-24 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450111#comment-16450111
 ] 

mingleizhang commented on FLINK-9231:
-

Hi, [~triones], I think you can open a PR first. I would not suggest posting a 
bunch of code here.

> Enable SO_REUSEADDR on listen sockets
> -
>
> Key: FLINK-9231
> URL: https://issues.apache.org/jira/browse/FLINK-9231
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Triones Deng
>Priority: Major
>
> This allows sockets to be bound even if there are sockets
> from a previous application that are still pending closure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9231) Enable SO_REUSEADDR on listen sockets

2018-04-24 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449911#comment-16449911
 ] 

mingleizhang commented on FLINK-9231:
-

Come on [~triones]! :D

> Enable SO_REUSEADDR on listen sockets
> -
>
> Key: FLINK-9231
> URL: https://issues.apache.org/jira/browse/FLINK-9231
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Triones Deng
>Priority: Major
>
> This allows sockets to be bound even if there are sockets
> from a previous application that are still pending closure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9239) Reintroduce example program in quickstarts

2018-04-23 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448027#comment-16448027
 ] 

mingleizhang commented on FLINK-9239:
-

After thinking about it for a while: regarding your point that such example 
programs are very useful for verifying an IDE or local cluster setup, I would 
still think a WordCount is enough. A stateful application may be a little hard 
for a new user to understand, since people without prior streaming programming 
experience do not even know what stateful means.

> Reintroduce example program in quickstarts
> --
>
> Key: FLINK-9239
> URL: https://issues.apache.org/jira/browse/FLINK-9239
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Major
>
> FLINK-8761 removed all example programs from the quickstarts. I would propose 
> to reintroduce at least one tiny example that is runnable out of the box with 
> no parameters. Right now users are facing an exception that no sinks are 
> defined when running one of the program skeleton. Such example programs are 
> very useful in order to verify an IDE or local cluster setup. 
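For reference, the WordCount logic under discussion is tiny. The sketch below expresses it in plain Java; a real quickstart example would express the same pipeline with Flink's DataStream API (roughly flatMap, keyBy, sum) so it runs out of the box with no parameters.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountSketch {
    // Tokenize each line, drop empty tokens, and count occurrences per word.
    static Map<String, Long> count(List<String> lines) {
        return lines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\W+")))
                .filter(word -> !word.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }
}
```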



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9239) Reintroduce example program in quickstarts

2018-04-23 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16448018#comment-16448018
 ] 

mingleizhang commented on FLINK-9239:
-

Hi, [~twalthr], that is a good idea. How about using operator state to record 
counts in a counting application?

> Reintroduce example program in quickstarts
> --
>
> Key: FLINK-9239
> URL: https://issues.apache.org/jira/browse/FLINK-9239
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Major
>
> FLINK-8761 removed all example programs from the quickstarts. I would propose 
> to reintroduce at least one tiny example that is runnable out of the box with 
> no parameters. Right now users are facing an exception that no sinks are 
> defined when running one of the program skeleton. Such example programs are 
> very useful in order to verify an IDE or local cluster setup. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9239) Reintroduce example program in quickstarts

2018-04-23 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447998#comment-16447998
 ] 

mingleizhang commented on FLINK-9239:
-

I would choose a {{WordCount}} as the example for new users if there are no 
objections.

> Reintroduce example program in quickstarts
> --
>
> Key: FLINK-9239
> URL: https://issues.apache.org/jira/browse/FLINK-9239
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Major
>
> FLINK-8761 removed all example programs from the quickstarts. I would propose 
> to reintroduce at least one tiny example that is runnable out of the box with 
> no parameters. Right now users are facing an exception that no sinks are 
> defined when running one of the program skeleton. Such example programs are 
> very useful in order to verify an IDE or local cluster setup. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9239) Reintroduce example program in quickstarts

2018-04-23 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9239:
---

Assignee: mingleizhang

> Reintroduce example program in quickstarts
> --
>
> Key: FLINK-9239
> URL: https://issues.apache.org/jira/browse/FLINK-9239
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Major
>
> FLINK-8761 removed all example programs from the quickstarts. I would propose 
> to reintroduce at least one tiny example that is runnable out of the box with 
> no parameters. Right now users are facing an exception that no sinks are 
> defined when running one of the program skeletons. Such example programs are 
> very useful in order to verify an IDE or local cluster setup. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-04-22 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9185:
---

Assignee: (was: mingleizhang)

> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.
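The unboxing hazard described above can be reproduced without any Flink classes. A minimal sketch, assuming a stand-in {{BiFunction}} for {{approveFun}} that may return null (all names here are hypothetical):

```java
import java.util.function.BiFunction;

public class NullSafeApprove {

    // Stand-in for approveFun: a function that can legally return null
    // instead of a boxed Boolean.
    static final BiFunction<String, String, Boolean> APPROVE = (ref, alt) -> null;

    static boolean unsafe(String ref, String alt) {
        // Auto-unboxing the null Boolean throws a NullPointerException here.
        return APPROVE.apply(ref, alt);
    }

    static boolean safe(String ref, String alt) {
        // Treat a null result as "not approved" instead of dereferencing it.
        return Boolean.TRUE.equals(APPROVE.apply(ref, alt));
    }

    public static void main(String[] args) {
        boolean npeThrown = false;
        try {
            unsafe("reference", "alternative");
        } catch (NullPointerException e) {
            npeThrown = true;
        }
        System.out.println("unsafe threw NPE: " + npeThrown); // true
        System.out.println("safe result: " + safe("reference", "alternative")); // false
    }
}
```

The `Boolean.TRUE.equals(...)` pattern is one common null-safe fix; checking the return value for null explicitly would work equally well.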



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-04-22 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447543#comment-16447543
 ] 

mingleizhang commented on FLINK-9185:
-

Hi [~Stephen Jason], I have unassigned it now.

> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9236) Use Apache Parent POM 19

2018-04-22 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9236:
---

Assignee: mingleizhang

> Use Apache Parent POM 19
> 
>
> Key: FLINK-9236
> URL: https://issues.apache.org/jira/browse/FLINK-9236
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Major
>
> Flink is still using Apache Parent POM 18. Apache Parent POM 19 is out. This 
> will also fix Javadoc generation with JDK 10+.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (FLINK-8335) Upgrade hbase connector dependency to 1.4.3

2018-04-22 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang resolved FLINK-8335.
-
Resolution: Fixed

> Upgrade hbase connector dependency to 1.4.3
> ---
>
> Key: FLINK-8335
> URL: https://issues.apache.org/jira/browse/FLINK-8335
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
> Fix For: 1.6.0
>
>
> hbase 1.4.3 has been released.
> 1.4.0 shows speed improvement over previous 1.x releases.
> http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available
> This issue is to upgrade the dependency to 1.4.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-8999) Ensure the job has an operator with operator state.

2018-04-21 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-8999:
---

Assignee: mingleizhang

> Ensure the job has an operator with operator state.
> ---
>
> Key: FLINK-8999
> URL: https://issues.apache.org/jira/browse/FLINK-8999
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: mingleizhang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9193) Deprecate non-well-defined output methods on DataStream

2018-04-21 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447058#comment-16447058
 ] 

mingleizhang commented on FLINK-9193:
-

Do we plan to support writing to ORC or Parquet formats in the future?

> Deprecate non-well-defined output methods on DataStream
> ---
>
> Key: FLINK-9193
> URL: https://issues.apache.org/jira/browse/FLINK-9193
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.5.0
>
>
> Some output methods on {{DataStream}} that write text to files are not safe 
> to use in a streaming program as they have no consistency guarantees. They 
> are:
>  - {{writeAsText()}}
>  - {{writeAsCsv()}}
>  - {{writeToSocket()}}
>  - {{writeUsingOutputFormat()}}
> Along with those we should also deprecate the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9230) WebFrontendITCase.testStopYarn is unstable

2018-04-20 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9230:

Description: 
https://api.travis-ci.org/v3/job/369380167/log.txt
{code:java}
Running org.apache.flink.runtime.webmonitor.WebFrontendITCase
Running org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase
Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.348 sec <<< 
FAILURE! - in org.apache.flink.runtime.webmonitor.WebFrontendITCase
testStopYarn(org.apache.flink.runtime.webmonitor.WebFrontendITCase)  Time 
elapsed: 1.365 sec  <<< FAILURE!
java.lang.AssertionError: expected:<202 Accepted> but was:<404 Not Found>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.flink.runtime.webmonitor.WebFrontendITCase.testStopYarn(WebFrontendITCase.java:359)

Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.805 sec - in 
org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase

Results :

Failed tests: 
  WebFrontendITCase.testStopYarn:359 expected:<202 Accepted> but was:<404 Not 
Found>

Tests run: 14, Failures: 1, Errors: 0, Skipped: 0

02:17:10.224 [INFO] 

02:17:10.224 [INFO] Reactor Summary:
02:17:10.224 [INFO] 
02:17:10.224 [INFO] flink-core . 
SUCCESS [ 58.759 s]
02:17:10.224 [INFO] flink-java . 
SUCCESS [ 30.613 s]
02:17:10.224 [INFO] flink-runtime .. 
SUCCESS [22:26 min]
02:17:10.224 [INFO] flink-optimizer  
SUCCESS [  5.508 s]
02:17:10.224 [INFO] flink-clients .. 
SUCCESS [ 12.354 s]
02:17:10.224 [INFO] flink-streaming-java ... 
SUCCESS [02:53 min]
02:17:10.224 [INFO] flink-scala  
SUCCESS [ 18.968 s]
02:17:10.225 [INFO] flink-test-utils ... 
SUCCESS [  4.826 s]
02:17:10.225 [INFO] flink-statebackend-rocksdb . 
SUCCESS [ 13.644 s]
02:17:10.225 [INFO] flink-runtime-web .. 
FAILURE [ 15.095 s]
02:17:10.225 [INFO] flink-streaming-scala .. SKIPPED
02:17:10.225 [INFO] flink-scala-shell .. SKIPPED
02:17:10.225 [INFO] 

02:17:10.225 [INFO] BUILD FAILURE
02:17:10.225 [INFO] 

02:17:10.225 [INFO] Total time: 28:03 min
02:17:10.225 [INFO] Finished at: 2018-04-21T02:17:10+00:00
02:17:10.837 [INFO] Final Memory: 87M/711M
02:17:10.837 [INFO] 

{code}

  was:

{code:java}
Running org.apache.flink.runtime.webmonitor.WebFrontendITCase
Running org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase
Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.348 sec <<< 
FAILURE! - in org.apache.flink.runtime.webmonitor.WebFrontendITCase
testStopYarn(org.apache.flink.runtime.webmonitor.WebFrontendITCase)  Time 
elapsed: 1.365 sec  <<< FAILURE!
java.lang.AssertionError: expected:<202 Accepted> but was:<404 Not Found>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.flink.runtime.webmonitor.WebFrontendITCase.testStopYarn(WebFrontendITCase.java:359)

Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.805 sec - in 
org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase

Results :

Failed tests: 
  WebFrontendITCase.testStopYarn:359 expected:<202 Accepted> but was:<404 Not 
Found>

Tests run: 14, Failures: 1, Errors: 0, Skipped: 0

02:17:10.224 [INFO] 

02:17:10.224 [INFO] Reactor Summary:
02:17:10.224 [INFO] 
02:17:10.224 [INFO] flink-core . 
SUCCESS [ 58.759 s]
02:17:10.224 [INFO] flink-java . 
SUCCESS [ 30.613 s]
02:17:10.224 [INFO] flink-runtime .. 
SUCCESS [22:26 min]
02:17:10.224 [INFO] flink-optimizer  
SUCCESS [  5.508 s]
02:17:10.224 [INFO] flink-clients .. 
SUCCESS [ 12.354 s]
02:17:10.224 [INFO] flink-streaming-java ... 
SUCCESS [02:53 min]
02:17:10.224 [INFO] flink-scala  

[jira] [Created] (FLINK-9230) WebFrontendITCase.testStopYarn is unstable

2018-04-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9230:
---

 Summary: WebFrontendITCase.testStopYarn is unstable
 Key: FLINK-9230
 URL: https://issues.apache.org/jira/browse/FLINK-9230
 Project: Flink
  Issue Type: Improvement
  Components: Tests
 Environment: The latest commit : 
7d0bfd51e967eb1eb2c2869d79cb7cd13b8366dd on master branch.
Reporter: mingleizhang



{code:java}
Running org.apache.flink.runtime.webmonitor.WebFrontendITCase
Running org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase
Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.348 sec <<< 
FAILURE! - in org.apache.flink.runtime.webmonitor.WebFrontendITCase
testStopYarn(org.apache.flink.runtime.webmonitor.WebFrontendITCase)  Time 
elapsed: 1.365 sec  <<< FAILURE!
java.lang.AssertionError: expected:<202 Accepted> but was:<404 Not Found>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.flink.runtime.webmonitor.WebFrontendITCase.testStopYarn(WebFrontendITCase.java:359)

Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.805 sec - in 
org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase

Results :

Failed tests: 
  WebFrontendITCase.testStopYarn:359 expected:<202 Accepted> but was:<404 Not 
Found>

Tests run: 14, Failures: 1, Errors: 0, Skipped: 0

02:17:10.224 [INFO] 

02:17:10.224 [INFO] Reactor Summary:
02:17:10.224 [INFO] 
02:17:10.224 [INFO] flink-core . 
SUCCESS [ 58.759 s]
02:17:10.224 [INFO] flink-java . 
SUCCESS [ 30.613 s]
02:17:10.224 [INFO] flink-runtime .. 
SUCCESS [22:26 min]
02:17:10.224 [INFO] flink-optimizer  
SUCCESS [  5.508 s]
02:17:10.224 [INFO] flink-clients .. 
SUCCESS [ 12.354 s]
02:17:10.224 [INFO] flink-streaming-java ... 
SUCCESS [02:53 min]
02:17:10.224 [INFO] flink-scala  
SUCCESS [ 18.968 s]
02:17:10.225 [INFO] flink-test-utils ... 
SUCCESS [  4.826 s]
02:17:10.225 [INFO] flink-statebackend-rocksdb . 
SUCCESS [ 13.644 s]
02:17:10.225 [INFO] flink-runtime-web .. 
FAILURE [ 15.095 s]
02:17:10.225 [INFO] flink-streaming-scala .. SKIPPED
02:17:10.225 [INFO] flink-scala-shell .. SKIPPED
02:17:10.225 [INFO] 

02:17:10.225 [INFO] BUILD FAILURE
02:17:10.225 [INFO] 

02:17:10.225 [INFO] Total time: 28:03 min
02:17:10.225 [INFO] Finished at: 2018-04-21T02:17:10+00:00
02:17:10.837 [INFO] Final Memory: 87M/711M
02:17:10.837 [INFO] 

{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-04-20 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445640#comment-16445640
 ] 

mingleizhang commented on FLINK-9185:
-

Okay, I can give this to you. You can open a PR for it.

> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9227:

Component/s: Tests

> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release, to make sure that the tests pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-9227:

Fix Version/s: 1.5.0

> Add Bucketing e2e test script to run-nightly-tests.sh
> -
>
> Key: FLINK-9227
> URL: https://issues.apache.org/jira/browse/FLINK-9227
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0
>
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-nightly-tests.sh}}; the latter is executed when manually verifying a 
> release, to make sure that the tests pass.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9227) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9227:
---

 Summary: Add Bucketing e2e test script to run-nightly-tests.sh
 Key: FLINK-9227
 URL: https://issues.apache.org/jira/browse/FLINK-9227
 Project: Flink
  Issue Type: Improvement
Reporter: mingleizhang
Assignee: mingleizhang


The {{test_streaming_bucketing.sh}} script is not yet added to {{run-nightly-tests.sh}}; 
the latter is executed when manually verifying a release, to make sure that the 
tests pass.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9226) Add Bucketing e2e test script to run-nightly-tests.sh

2018-04-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9226:
---

 Summary: Add Bucketing e2e test script to run-nightly-tests.sh
 Key: FLINK-9226
 URL: https://issues.apache.org/jira/browse/FLINK-9226
 Project: Flink
  Issue Type: Improvement
Reporter: mingleizhang
Assignee: mingleizhang


The {{test_streaming_bucketing.sh}} script is not yet added to {{run-nightly-tests.sh}}; 
the latter is executed when manually verifying a release, to make sure that the 
tests pass.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445487#comment-16445487
 ] 

mingleizhang commented on FLINK-9224:
-

Thanks [~twalthr]. :) 

> Add Bucketing e2e test script to run-pre-commit-tests.sh
> 
>
> Key: FLINK-9224
> URL: https://issues.apache.org/jira/browse/FLINK-9224
> Project: Flink
>  Issue Type: Improvement
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-pre-commit-tests.sh}}; the latter is executed by Travis to verify that 
> the e2e tests pass. So we should add it and have Travis execute it for every 
> git commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445393#comment-16445393
 ] 

mingleizhang commented on FLINK-9224:
-

[~twalthr] Should it be put into {{run-nightly-tests.sh}}?

> Add Bucketing e2e test script to run-pre-commit-tests.sh
> 
>
> Key: FLINK-9224
> URL: https://issues.apache.org/jira/browse/FLINK-9224
> Project: Flink
>  Issue Type: Improvement
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-pre-commit-tests.sh}}; the latter is executed by Travis to verify that 
> the e2e tests pass. So we should add it and have Travis execute it for every 
> git commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445389#comment-16445389
 ] 

mingleizhang commented on FLINK-9224:
-

Hi [~twalthr], could you take a look at this JIRA? Thank you.

> Add Bucketing e2e test script to run-pre-commit-tests.sh
> 
>
> Key: FLINK-9224
> URL: https://issues.apache.org/jira/browse/FLINK-9224
> Project: Flink
>  Issue Type: Improvement
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Major
>
> The {{test_streaming_bucketing.sh}} script is not yet added to 
> {{run-pre-commit-tests.sh}}; the latter is executed by Travis to verify that 
> the e2e tests pass. So we should add it and have Travis execute it for every 
> git commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9224) Add Bucketing e2e test script to run-pre-commit-tests.sh

2018-04-20 Thread mingleizhang (JIRA)
mingleizhang created FLINK-9224:
---

 Summary: Add Bucketing e2e test script to run-pre-commit-tests.sh
 Key: FLINK-9224
 URL: https://issues.apache.org/jira/browse/FLINK-9224
 Project: Flink
  Issue Type: Improvement
Reporter: mingleizhang
Assignee: mingleizhang


The {{test_streaming_bucketing.sh}} script is not yet added to {{run-pre-commit-tests.sh}}; 
the latter is executed by Travis to verify that the e2e tests pass. So we should 
add it and have Travis execute it for every git commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8646) Move docs directory to flink-docs

2018-04-19 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16443821#comment-16443821
 ] 

mingleizhang commented on FLINK-8646:
-

I don't know either, but I am reading 
http://flink.apache.org/contribute-documentation.html#obtain-the-documentation-sources
 to learn how to do that. I have never tried this before.

> Move docs directory to flink-docs
> -
>
> Key: FLINK-8646
> URL: https://issues.apache.org/jira/browse/FLINK-8646
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Documentation
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Minor
>
> We should look into moving the /docs directory into the flink-docs module to 
> keep all docs related files in one module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-8646) Move docs directory to flink-docs

2018-04-19 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-8646:
---

Assignee: mingleizhang

> Move docs directory to flink-docs
> -
>
> Key: FLINK-8646
> URL: https://issues.apache.org/jira/browse/FLINK-8646
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System, Documentation
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Minor
>
> We should look into moving the /docs directory into the flink-docs module to 
> keep all docs related files in one module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-9193) Deprecate non-well-defined output methods on DataStream

2018-04-17 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441743#comment-16441743
 ] 

mingleizhang edited comment on FLINK-9193 at 4/18/18 1:41 AM:
--

I agree with [~twalthr] on this point. It would be better to add comments to 
those methods noting that they cannot provide consistency guarantees, instead of 
deprecating them. For example, in data analysis for data cleaning, we do not 
need strong consistency guarantees, and some records may be dropped or duplicated.


was (Author: mingleizhang):
I agree with [~twalthr] at this point. It would be better adding more comments 
on those method like those method can not support consistency guarantees 
instead of deprecating them. Like in data analysis for data cleaning, we do 
need strong consistency guarantees and many data are drop and duplicate.

> Deprecate non-well-defined output methods on DataStream
> ---
>
> Key: FLINK-9193
> URL: https://issues.apache.org/jira/browse/FLINK-9193
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.5.0
>
>
> Some output methods on {{DataStream}} that write text to files are not safe 
> to use in a streaming program as they have no consistency guarantees. They 
> are:
>  - {{writeAsText()}}
>  - {{writeAsCsv()}}
>  - {{writeToSocket()}}
>  - {{writeUsingOutputFormat()}}
> Along with those we should also deprecate the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-9193) Deprecate non-well-defined output methods on DataStream

2018-04-17 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441743#comment-16441743
 ] 

mingleizhang edited comment on FLINK-9193 at 4/18/18 1:40 AM:
--

I agree with [~twalthr] at this point. It would be better adding more comments 
on those method like those method can not support consistency guarantees 
instead of deprecating them. Like in data analysis for data cleaning, we do 
need strong consistency guarantees and many data are drop and duplicate.


was (Author: mingleizhang):
I agree with [~twalthr] at this point. It would be better adding more comments 
on those method like those method can not support consistency guarantees 
instead of deprecating them. Like in data analysis for data cleaning, we do 
need strong consistency guarantees.

> Deprecate non-well-defined output methods on DataStream
> ---
>
> Key: FLINK-9193
> URL: https://issues.apache.org/jira/browse/FLINK-9193
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.5.0
>
>
> Some output methods on {{DataStream}} that write text to files are not safe 
> to use in a streaming program as they have no consistency guarantees. They 
> are:
>  - {{writeAsText()}}
>  - {{writeAsCsv()}}
>  - {{writeToSocket()}}
>  - {{writeUsingOutputFormat()}}
> Along with those we should also deprecate the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9193) Deprecate non-well-defined output methods on DataStream

2018-04-17 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441743#comment-16441743
 ] 

mingleizhang commented on FLINK-9193:
-

I agree with [~twalthr] at this point. It would be better adding more comments 
on those method like those method can not support consistency guarantees 
instead of deprecating them. Like in data analysis for data cleaning, we do 
need strong consistency guarantees.

> Deprecate non-well-defined output methods on DataStream
> ---
>
> Key: FLINK-9193
> URL: https://issues.apache.org/jira/browse/FLINK-9193
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
> Fix For: 1.5.0
>
>
> Some output methods on {{DataStream}} that write text to files are not safe 
> to use in a streaming program as they have no consistency guarantees. They 
> are:
>  - {{writeAsText()}}
>  - {{writeAsCsv()}}
>  - {{writeToSocket()}}
>  - {{writeUsingOutputFormat()}}
> Along with those we should also deprecate the {{SinkFunctions}} that they use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8661) Replace Collections.EMPTY_MAP with Collections.emptyMap()

2018-04-17 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441734#comment-16441734
 ] 

mingleizhang commented on FLINK-8661:
-

Sorry for the delay; my friend wanted to take this ticket, but I think he is too 
busy to do it, so I will open a PR soon.

> Replace Collections.EMPTY_MAP with Collections.emptyMap()
> -
>
> Key: FLINK-8661
> URL: https://issues.apache.org/jira/browse/FLINK-8661
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> The use of Collections.EMPTY_SET and Collections.EMPTY_MAP often causes 
> unchecked assignment. It should be replaced with Collections.emptySet() and 
> Collections.emptyMap().
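A minimal sketch of the difference, using only the JDK (the class and method names are hypothetical):

```java
import java.util.Collections;
import java.util.Map;

public class EmptyMapExample {

    // Collections.EMPTY_MAP is a raw Map, so assigning it to a typed
    // reference produces an "unchecked assignment" compiler warning.
    @SuppressWarnings("unchecked")
    static Map<String, Integer> raw() {
        return Collections.EMPTY_MAP;
    }

    // Collections.emptyMap() infers the type parameters from the
    // return type, so no warning or suppression is needed.
    static Map<String, Integer> typed() {
        return Collections.emptyMap();
    }

    public static void main(String[] args) {
        System.out.println(raw().isEmpty() && typed().isEmpty()); // true
    }
}
```

Both calls return the same immutable empty map at runtime; only the compile-time type safety differs, which is why the replacement is a safe mechanical change.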



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8335) Upgrade hbase connector dependency to 1.4.3

2018-04-17 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-8335:

Fix Version/s: 1.6.0

> Upgrade hbase connector dependency to 1.4.3
> ---
>
> Key: FLINK-8335
> URL: https://issues.apache.org/jira/browse/FLINK-8335
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
> Fix For: 1.6.0
>
>
> hbase 1.4.3 has been released.
> 1.4.0 shows speed improvement over previous 1.x releases.
> http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available
> This issue is to upgrade the dependency to 1.4.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8335) Upgrade hbase connector dependency to 1.4.3

2018-04-17 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-8335:

Description: 
hbase 1.4.3 has been released.

1.4.0 shows speed improvement over previous 1.x releases.

http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available

This issue is to upgrade the dependency to 1.4.3

  was:
hbase 1.4.1 has been released.

1.4.0 shows speed improvement over previous 1.x releases.

http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available

This issue is to upgrade the dependency to 1.4.1


> Upgrade hbase connector dependency to 1.4.3
> ---
>
> Key: FLINK-8335
> URL: https://issues.apache.org/jira/browse/FLINK-8335
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> hbase 1.4.3 has been released.
> 1.4.0 shows speed improvement over previous 1.x releases.
> http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available
> This issue is to upgrade the dependency to 1.4.3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8978) End-to-end test: Job upgrade

2018-04-17 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440834#comment-16440834
 ] 

mingleizhang commented on FLINK-8978:
-

Sounds like hot-swap functionality. Interesting.

> End-to-end test: Job upgrade
> 
>
> Key: FLINK-8978
> URL: https://issues.apache.org/jira/browse/FLINK-8978
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.6.0, 1.5.1
>
>
> Job upgrades usually happen during the lifetime of a real world Flink job. 
> Therefore, we should add an end-to-end test which exactly covers this 
> scenario. I suggest doing the following:
> # run the general purpose testing job FLINK-8971
> # take a savepoint
> # Modify the job by introducing a new operator and changing the order of 
> others
> # Resume the modified job from the savepoint
> # Verify that everything went correctly



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9185) Potential null dereference in PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives

2018-04-16 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9185:
---

Assignee: mingleizhang

> Potential null dereference in 
> PrioritizedOperatorSubtaskState#resolvePrioritizedAlternatives
> 
>
> Key: FLINK-9185
> URL: https://issues.apache.org/jira/browse/FLINK-9185
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
> if (alternative != null
>   && alternative.hasState()
>   && alternative.size() == 1
>   && approveFun.apply(reference, alternative.iterator().next())) {
> {code}
> The return value from approveFun.apply would be unboxed.
> We should check that the return value is not null.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9179) Deduplicate WebOptions.PORT and RestOptions.REST_PORT

2018-04-16 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9179:
---

Assignee: mingleizhang

> Deduplicate WebOptions.PORT and RestOptions.REST_PORT
> -
>
> Key: FLINK-9179
> URL: https://issues.apache.org/jira/browse/FLINK-9179
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, REST, Webfrontend
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Blocker
> Fix For: 1.5.0
>
>
> In the past {{WebOptions.PORT}} was used to configure the port on which the 
> WebUI listens on. With the rework of the REST API we added a new 
> configuration key {{RestOptions.REST_PORT}} to specify on which port the REST 
> API listens on.
> Effectively these 2 options control the same thing, with the rest option 
> being broader and also applicable to components with a REST API but no WebUI.
> I suggest deprecating {{WebOptions.PORT}} and adding its key as a deprecated 
> key to {{RestOptions.REST_PORT}}.





[jira] [Assigned] (FLINK-9180) Remove REST_ prefix from rest options

2018-04-16 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9180:
---

Assignee: mingleizhang

> Remove REST_ prefix from rest options
> -
>
> Key: FLINK-9180
> URL: https://issues.apache.org/jira/browse/FLINK-9180
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, REST
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> Several fields in the {{RestOptions}} class have a {{REST_}} prefix. So far 
> we have followed the convention of omitting such prefixes when they are 
> already contained in the class name, hence we should remove them from the 
> field names.





[jira] [Commented] (FLINK-9176) Category annotations are unused

2018-04-15 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438920#comment-16438920
 ] 

mingleizhang commented on FLINK-9176:
-

I would re-introduce them into the surefire configuration. 

> Category annotations are unused
> ---
>
> Key: FLINK-9176
> URL: https://issues.apache.org/jira/browse/FLINK-9176
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> The {{LegacyAndNew}} and {{New}} annotations, that were previously used to 
> disable tests based on whether the {{legacyCode}} profile is active, are 
> effectively unused.
> While several tests are annotated with them, the annotations are never used 
> in the actual {{surefire}} configuration.
> We should either re-introduce them into the {{surefire}} configuration, or 
> remove them.





[jira] [Assigned] (FLINK-9176) Category annotations are unused

2018-04-15 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9176:
---

Assignee: mingleizhang

> Category annotations are unused
> ---
>
> Key: FLINK-9176
> URL: https://issues.apache.org/jira/browse/FLINK-9176
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Tests
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: mingleizhang
>Priority: Critical
> Fix For: 1.5.0
>
>
> The {{LegacyAndNew}} and {{New}} annotations, that were previously used to 
> disable tests based on whether the {{legacyCode}} profile is active, are 
> effectively unused.
> While several tests are annotated with them, the annotations are never used 
> in the actual {{surefire}} configuration.
> We should either re-introduce them into the {{surefire}} configuration, or 
> remove them.





[jira] [Assigned] (FLINK-8993) Add a test operator with keyed state that uses Kryo serializer (registered/unregistered/custom)

2018-04-15 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-8993:
---

Assignee: mingleizhang

> Add a test operator with keyed state that uses Kryo serializer 
> (registered/unregistered/custom)
> ---
>
> Key: FLINK-8993
> URL: https://issues.apache.org/jira/browse/FLINK-8993
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Stefan Richter
>Assignee: mingleizhang
>Priority: Major
>
> Add an operator with keyed state that uses Kryo serializer 
> (registered/unregistered/custom).





[jira] [Commented] (FLINK-9156) CLI does not respect -m,--jobmanager option

2018-04-11 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433690#comment-16433690
 ] 

mingleizhang commented on FLINK-9156:
-

Yes. The CLI option should take precedence; the client must not fall back to 
{{jobmanager.rpc.address}} from {{flink-conf.yaml}} when {{-m}} is given.
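The intended precedence can be sketched in plain Java: the address from the {{-m}} option, when present, must win over the configured default (class and method names below are illustrative, not Flink's actual client code):

```java
import java.util.Optional;

public class JobManagerAddressResolution {
    // Illustrative: resolve the target address the client should connect to.
    static String resolveAddress(Optional<String> cliOption, String configuredAddress) {
        // The -m/--jobmanager CLI option takes precedence; only fall back
        // to jobmanager.rpc.address from flink-conf.yaml when it is absent.
        return cliOption.orElse(configuredAddress);
    }

    public static void main(String[] args) {
        // With -m given, the configured default is ignored.
        System.out.println(resolveAddress(Optional.of("172.31.35.68:6123"), "127.0.0.1:6123"));
        // Without -m, the configuration value applies.
        System.out.println(resolveAddress(Optional.empty(), "127.0.0.1:6123"));
    }
}
```

The bug report above shows the opposite behavior: the client connects to the configured {{127.0.0.1}} address even though {{-m 172.31.35.68:6123}} was passed.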

> CLI does not respect -m,--jobmanager option
> ---
>
> Key: FLINK-9156
> URL: https://issues.apache.org/jira/browse/FLINK-9156
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0
> Environment: 1.5 RC1
>Reporter: Gary Yao
>Priority: Blocker
> Fix For: 1.5.0
>
>
> *Description*
> The CLI does not respect the {{-m, --jobmanager}} option. For example, 
> submitting a job using 
> {noformat}
> bin/flink run -m 172.31.35.68:6123 [...]
> {noformat}
> results in the client trying to connect to what is specified in 
> {{flink-conf.yaml}}.
> *Stacktrace*
> {noformat}
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Could not submit 
> job 99b0a48ec5cb4086740b1ffd38efd1af.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:244)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
>   at 
> org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:77)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:410)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:780)
>   at 
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:274)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:209)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1019)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1095)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1095)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$4(RestClusterClient.java:351)
>   at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>   at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkException: Could not upload job jar files.
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:326)
>   at 
> java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
>   at 
> java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
>   ... 7 more
> Caused by: org.apache.flink.util.FlinkException: Could not upload job jar 
> files.
>   ... 10 more
> Caused by: java.io.IOException: Could not connect to BlobServer at address 
> /127.0.0.1:41909
>   at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:124)
>   at 
> org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:547)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$2(RestClusterClient.java:324)
>   ... 9 more
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> 

[jira] [Commented] (FLINK-9087) Return value of broadcastEvent should be closed in StreamTask#performCheckpoint

2018-04-10 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432430#comment-16432430
 ] 

mingleizhang commented on FLINK-9087:
-

It seems that [~triones] does not have permission to perform the write 
operation at the moment. I can help with this, or a committer can grant you 
permission so that you can do it yourself.

> Return value of broadcastEvent should be closed in 
> StreamTask#performCheckpoint
> ---
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.
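The fix amounts to closing the value returned from each call, e.g. with try-with-resources. A language-level sketch using a stand-in {{AutoCloseable}} (the names below are placeholders, not the actual Flink types):

```java
public class CloseReturnedResourceDemo {
    static int openCount = 0;

    // Stand-in for the BufferConsumer returned by broadcastEvent().
    static class TrackedResource implements AutoCloseable {
        TrackedResource() { openCount++; }
        @Override public void close() { openCount--; }
    }

    // Stand-in for streamRecordWriter.broadcastEvent(message): the caller
    // owns the returned resource and is responsible for closing it.
    static TrackedResource broadcastEvent(String message) {
        return new TrackedResource();
    }

    public static void main(String[] args) {
        for (String message : new String[]{"barrier-1", "barrier-2"}) {
            // try-with-resources guarantees the returned value is closed
            // even if later checkpoint logic throws.
            try (TrackedResource consumer = broadcastEvent(message)) {
                // ... checkpoint bookkeeping would go here ...
            }
        }
        System.out.println(openCount); // prints 0: all resources released
    }
}
```

In the quoted loop, the return value of {{broadcastEvent}} is currently discarded, so the resource it represents is never released.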





[jira] [Updated] (FLINK-8335) Upgrade hbase connector dependency to 1.4.3

2018-04-05 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-8335:

Summary: Upgrade hbase connector dependency to 1.4.3  (was: Upgrade hbase 
connector dependency to 1.4.1)

> Upgrade hbase connector dependency to 1.4.3
> ---
>
> Key: FLINK-8335
> URL: https://issues.apache.org/jira/browse/FLINK-8335
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> hbase 1.4.1 has been released.
> 1.4.0 shows speed improvement over previous 1.x releases.
> http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available
> This issue is to upgrade the dependency to 1.4.1





[jira] [Commented] (FLINK-8335) Upgrade hbase connector dependency to 1.4.1

2018-04-05 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16427834#comment-16427834
 ] 

mingleizhang commented on FLINK-8335:
-

Thanks [~yuzhih...@gmail.com]. I will upgrade this to 1.4.3 later today.

> Upgrade hbase connector dependency to 1.4.1
> ---
>
> Key: FLINK-8335
> URL: https://issues.apache.org/jira/browse/FLINK-8335
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Priority: Minor
>
> hbase 1.4.1 has been released.
> 1.4.0 shows speed improvement over previous 1.x releases.
> http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available
> This issue is to upgrade the dependency to 1.4.1





[jira] [Assigned] (FLINK-8335) Upgrade hbase connector dependency to 1.4.1

2018-04-05 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-8335:
---

Assignee: mingleizhang

> Upgrade hbase connector dependency to 1.4.1
> ---
>
> Key: FLINK-8335
> URL: https://issues.apache.org/jira/browse/FLINK-8335
> Project: Flink
>  Issue Type: Improvement
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> hbase 1.4.1 has been released.
> 1.4.0 shows speed improvement over previous 1.x releases.
> http://search-hadoop.com/m/HBase/YGbbBxedD1Mnm8t?subj=Re+VOTE+The+second+HBase+1+4+0+release+candidate+RC1+is+available
> This issue is to upgrade the dependency to 1.4.1




