[jira] [Assigned] (FLINK-7642) Upgrade maven surefire plugin to 2.21.0

2018-03-30 Thread vinoyang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-7642:
---

Assignee: vinoyang

> Upgrade maven surefire plugin to 2.21.0
> ---
>
> Key: FLINK-7642
> URL: https://issues.apache.org/jira/browse/FLINK-7642
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Assignee: vinoyang
>Priority: Major
>
> The Surefire 2.19 release introduced more useful test filters which let us 
> run a subset of the tests.
> This issue is for upgrading maven surefire plugin to 2.21.0 which contains 
> SUREFIRE-1422
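
For reference, the upgrade itself is essentially a version bump of the plugin in 
the parent pom, sketched below; the real Flink pom also carries surefire 
configuration that is not shown here.

{code:xml}
<!-- minimal sketch: bump the surefire version to pick up SUREFIRE-1422 -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.21.0</version>
</plugin>
{code}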



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-8971) Create general purpose testing job

2018-03-30 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16421148#comment-16421148
 ] 

mingleizhang edited comment on FLINK-8971 at 3/31/18 2:44 AM:
--

Page not found for the [heavily misbehaved 
job|https://github.com/rmetzger/flink-testing-jobs/blob/flink-1.4/basic-jobs/src/main/java/com/dataartisans/heavymisbehaved/HeavyMisbehavedJob.java] 
and for the state machine job as well.


was (Author: mingleizhang):
Page not found [heavily misbehaved 
job|https://github.com/rmetzger/flink-testing-jobs/blob/flink-1.4/basic-jobs/src/main/java/com/dataartisans/heavymisbehaved/HeavyMisbehavedJob.java]

> Create general purpose testing job
> --
>
> Key: FLINK-8971
> URL: https://issues.apache.org/jira/browse/FLINK-8971
> Project: Flink
>  Issue Type: Task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Critical
> Fix For: 1.5.0
>
>
> In order to write better end-to-end tests we need a general purpose testing 
> job which covers as many Flink aspects as possible. These include different 
> record and state types, user defined components, and operators.
> The job should allow activating certain misbehaviors, such as slowing certain 
> paths down or throwing exceptions to simulate failures.
> The job should come with a data generator which generates input data such 
> that the job can verify its own behavior. This includes the state as well as 
> the input/output records.
> We already have the [heavily misbehaved 
> job|https://github.com/rmetzger/flink-testing-jobs/blob/flink-1.4/basic-jobs/src/main/java/com/dataartisans/heavymisbehaved/HeavyMisbehavedJob.java]
>  which simulates some misbehavior. There is also the [state machine 
> job|https://github.com/dataArtisans/flink-testing-jobs/tree/master/streaming-state-machine]
>  which can verify itself for invalid state changes which indicate data loss. 
> We should incorporate their characteristics into the new general purpose job.
> Additionally, the general purpose job should contain the following aspects:
> * Job containing a sliding window aggregation
> * At least one generic Kryo type
> * At least one generic Avro type
> * At least one Avro specific record type
> * At least one input type for which we register a Kryo serializer
> * At least one input type for which we provide a user defined serializer
> * At least one state type for which we provide a user defined serializer
> * At least one state type which uses the AvroSerializer
> * Include an operator with ValueState
> * Value state changes should be verified (e.g. predictable series of values)
> * Include an operator with operator state
> * Include an operator with broadcast state
> * Broadcast state changes should be verified (e.g. predictable series of 
> values)
> * Include union state
> * User defined watermark assigner
> The job should be made available in the {{flink-end-to-end-tests}} module.
> This issue is intended to serve as an umbrella issue for developing and 
> extending this job.
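
As an illustration only (not the actual test job), the sketch below shows two of 
the aspects listed above in the DataStream API: a sliding event-time window 
aggregation and a user defined watermark assigner. The class and field names are 
invented for this example.

{code:java}
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class GeneralPurposeJobSketch {

    /** Simple POJO record type; the real job would use several record and state types. */
    public static class Event {
        public long key;
        public long timestamp;
        public long value;

        public Event() {}

        public Event(long key, long timestamp, long value) {
            this.key = key;
            this.timestamp = timestamp;
            this.value = value;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        env.fromElements(new Event(1, 1_000L, 1), new Event(1, 6_000L, 2), new Event(2, 7_000L, 3))
            // user defined watermark assigner
            .assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<Event>(Time.seconds(1)) {
                    @Override
                    public long extractTimestamp(Event event) {
                        return event.timestamp;
                    }
                })
            .keyBy("key")
            // sliding window aggregation
            .window(SlidingEventTimeWindows.of(Time.seconds(10), Time.seconds(5)))
            .reduce((a, b) -> new Event(a.key, Math.max(a.timestamp, b.timestamp), a.value + b.value))
            .print();

        env.execute("general-purpose-job-sketch");
    }
}
{code}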



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8971) Create general purpose testing job

2018-03-30 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16421148#comment-16421148
 ] 

mingleizhang commented on FLINK-8971:
-

Page not found for the [heavily misbehaved 
job|https://github.com/rmetzger/flink-testing-jobs/blob/flink-1.4/basic-jobs/src/main/java/com/dataartisans/heavymisbehaved/HeavyMisbehavedJob.java].

> Create general purpose testing job
> --
>
> Key: FLINK-8971
> URL: https://issues.apache.org/jira/browse/FLINK-8971
> Project: Flink
>  Issue Type: Task
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Critical
> Fix For: 1.5.0
>
>
> In order to write better end-to-end tests we need a general purpose testing 
> job which covers as many Flink aspects as possible. These include different 
> record and state types, user defined components, and operators.
> The job should allow activating certain misbehaviors, such as slowing certain 
> paths down or throwing exceptions to simulate failures.
> The job should come with a data generator which generates input data such 
> that the job can verify its own behavior. This includes the state as well as 
> the input/output records.
> We already have the [heavily misbehaved 
> job|https://github.com/rmetzger/flink-testing-jobs/blob/flink-1.4/basic-jobs/src/main/java/com/dataartisans/heavymisbehaved/HeavyMisbehavedJob.java]
>  which simulates some misbehavior. There is also the [state machine 
> job|https://github.com/dataArtisans/flink-testing-jobs/tree/master/streaming-state-machine]
>  which can verify itself for invalid state changes which indicate data loss. 
> We should incorporate their characteristics into the new general purpose job.
> Additionally, the general purpose job should contain the following aspects:
> * Job containing a sliding window aggregation
> * At least one generic Kryo type
> * At least one generic Avro type
> * At least one Avro specific record type
> * At least one input type for which we register a Kryo serializer
> * At least one input type for which we provide a user defined serializer
> * At least one state type for which we provide a user defined serializer
> * At least one state type which uses the AvroSerializer
> * Include an operator with ValueState
> * Value state changes should be verified (e.g. predictable series of values)
> * Include an operator with operator state
> * Include an operator with broadcast state
> * Broadcast state changes should be verified (e.g. predictable series of 
> values)
> * Include union state
> * User defined watermark assigner
> The job should be made available in the {{flink-end-to-end-tests}} module.
> This issue is intended to serve as an umbrella issue for developing and 
> extending this job.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-9061) S3 checkpoint data not partitioned well -- causes errors and poor performance

2018-03-30 Thread Jamie Grier (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420885#comment-16420885
 ] 

Jamie Grier edited comment on FLINK-9061 at 3/31/18 1:32 AM:
-

Maybe we should keep this super simple and make a change at the state backend 
level where we optionally just reverse the key name.  That should actually work 
very well.


was (Author: jgrier):
Maybe we should keep this super simple and make a change at the state backend 
level where we optionally just reverse the path.  That should actually work 
very well.

> S3 checkpoint data not partitioned well -- causes errors and poor performance
> -
>
> Key: FLINK-9061
> URL: https://issues.apache.org/jira/browse/FLINK-9061
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystem, State Backends, Checkpointing
>Affects Versions: 1.4.2
>Reporter: Jamie Grier
>Priority: Critical
>
> I think we need to modify the way we write checkpoints to S3 for high-scale 
> jobs (those with many total tasks).  The issue is that we are writing all the 
> checkpoint data under a common key prefix.  This is the worst case scenario 
> for S3 performance since the key is used as a partition key.
>  
> In the worst case checkpoints fail with a 500 status code coming back from S3 
> and an internal error type of TooBusyException.
>  
> One possible solution would be to add a hook in the Flink filesystem code 
> that allows me to "rewrite" paths.  For example say I have the checkpoint 
> directory set to:
>  
> s3://bucket/flink/checkpoints
>  
> I would hook that and rewrite that path to:
>  
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original 
> path
>  
> This would distribute the checkpoint write load around the S3 cluster evenly.
>  
> For reference: 
> https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
>  
> Any other people hit this issue?  Any other ideas for solutions?  This is a 
> pretty serious problem for people trying to checkpoint to S3.
>  
> -Jamie
>  
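
A minimal sketch of the hash-prefix rewrite described above, assuming a 
hypothetical hook that lets us rewrite the checkpoint path (Flink does not expose 
such a hook today); the hashing scheme and names are purely illustrative.

{code:java}
import java.net.URI;

public class HashPrefixedCheckpointPath {

    /**
     * Rewrites s3://bucket/flink/checkpoints to s3://bucket/HASH/flink/checkpoints,
     * where HASH is derived from the original path. Spreading keys across many
     * prefixes avoids funnelling all checkpoint writes into a single S3 partition.
     */
    public static URI rewrite(URI checkpointPath) {
        String originalKey = checkpointPath.getPath();              // e.g. "/flink/checkpoints"
        String hash = Integer.toHexString(originalKey.hashCode());  // illustrative; any stable hash works
        return URI.create(checkpointPath.getScheme() + "://" + checkpointPath.getHost()
                + "/" + hash + originalKey);
    }

    public static void main(String[] args) {
        // prints something like s3://bucket/4f1c2a3b/flink/checkpoints
        System.out.println(rewrite(URI.create("s3://bucket/flink/checkpoints")));
    }
}
{code}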



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9118) Support a time based rollover of part file in Bucketing Sink

2018-03-30 Thread Lakshmi Rao (JIRA)
Lakshmi Rao created FLINK-9118:
--

 Summary: Support a time based rollover of part file in Bucketing 
Sink
 Key: FLINK-9118
 URL: https://issues.apache.org/jira/browse/FLINK-9118
 Project: Flink
  Issue Type: Improvement
  Components: filesystem-connector
Reporter: Lakshmi Rao


In the current implementation, the BucketingSink rolls over a part file based 
on a _batchSize_ 
([here|https://github.com/apache/flink/blob/release-1.4.0/flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.java#L459]).
  Can we also support a rollover based on a constant time interval? This is 
not the same as the _inactiveBucketCheckInterval_, because the bucket is not 
truly inactive: it is still being written to, but it has to be flushed every X 
minutes, where X is a user-specified time interval. 

The change would involve tracking a _bucketCreationTime_ in the BucketState 
(much like the _lastWrittenToTime_) whenever a new part file is opened and 
would include a condition to check _currentProcessingTime_ - 
_bucketCreationTime_ > X in the _shouldRoll_ method.
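
A self-contained sketch of the proposed condition; the names _rolloverInterval_ 
(the "X" above) and _bucketCreationTime_ follow the description, while the real 
BucketingSink internals are not reproduced here.

{code:java}
public class TimeBasedRolloverSketch {

    private final long batchSize;         // existing size-based threshold (bytes)
    private final long rolloverInterval;  // proposed time-based threshold (milliseconds)

    public TimeBasedRolloverSketch(long batchSize, long rolloverInterval) {
        this.batchSize = batchSize;
        this.rolloverInterval = rolloverInterval;
    }

    /** Roll the part file if it grew too large OR has simply been open too long. */
    public boolean shouldRoll(long writePosition, long bucketCreationTime, long currentProcessingTime) {
        return writePosition >= batchSize
                || currentProcessingTime - bucketCreationTime >= rolloverInterval;
    }

    public static void main(String[] args) {
        TimeBasedRolloverSketch sketch = new TimeBasedRolloverSketch(384L * 1024L * 1024L, 10 * 60 * 1000L);
        // small file, but opened 11 minutes ago -> roll anyway
        System.out.println(sketch.shouldRoll(1024L, 0L, 11 * 60 * 1000L)); // true
    }
}
{code}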



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9061) S3 checkpoint data not partitioned well -- causes errors and poor performance

2018-03-30 Thread Jamie Grier (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420885#comment-16420885
 ] 

Jamie Grier commented on FLINK-9061:


Maybe we should keep this super simple and make a change at the state backend 
level where we optionally just reverse the path.  That should actually work 
very well.

> S3 checkpoint data not partitioned well -- causes errors and poor performance
> -
>
> Key: FLINK-9061
> URL: https://issues.apache.org/jira/browse/FLINK-9061
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystem, State Backends, Checkpointing
>Affects Versions: 1.4.2
>Reporter: Jamie Grier
>Priority: Critical
>
> I think we need to modify the way we write checkpoints to S3 for high-scale 
> jobs (those with many total tasks).  The issue is that we are writing all the 
> checkpoint data under a common key prefix.  This is the worst case scenario 
> for S3 performance since the key is used as a partition key.
>  
> In the worst case checkpoints fail with a 500 status code coming back from S3 
> and an internal error type of TooBusyException.
>  
> One possible solution would be to add a hook in the Flink filesystem code 
> that allows me to "rewrite" paths.  For example say I have the checkpoint 
> directory set to:
>  
> s3://bucket/flink/checkpoints
>  
> I would hook that and rewrite that path to:
>  
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original 
> path
>  
> This would distribute the checkpoint write load around the S3 cluster evenly.
>  
> For reference: 
> https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
>  
> Any other people hit this issue?  Any other ideas for solutions?  This is a 
> pretty serious problem for people trying to checkpoint to S3.
>  
> -Jamie
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9059) Add support for unified table source and sink declaration in environment file

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420750#comment-16420750
 ] 

ASF GitHub Bot commented on FLINK-9059:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178334057
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/descriptors/TableDescriptor.scala
 ---
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors
+
+import org.apache.flink.table.descriptors.DescriptorProperties.toScala
+import 
org.apache.flink.table.descriptors.StatisticsValidator.{STATISTICS_COLUMNS, 
STATISTICS_ROW_COUNT, readColumnStats}
+import org.apache.flink.table.plan.stats.TableStats
+
+import scala.collection.JavaConverters._
+
+/**
+  * Common class for all descriptors describing table sources and sinks.
+  */
+abstract class TableDescriptor extends Descriptor {
+
+  protected var connectorDescriptor: Option[ConnectorDescriptor] = None
+  protected var formatDescriptor: Option[FormatDescriptor] = None
+  protected var schemaDescriptor: Option[Schema] = None
+  protected var statisticsDescriptor: Option[Statistics] = None
+  protected var metaDescriptor: Option[Metadata] = None
+
+  /**
+* Internal method for properties conversion.
+*/
+  override private[flink] def addProperties(properties: 
DescriptorProperties): Unit = {
+connectorDescriptor.foreach(_.addProperties(properties))
+formatDescriptor.foreach(_.addProperties(properties))
+schemaDescriptor.foreach(_.addProperties(properties))
+metaDescriptor.foreach(_.addProperties(properties))
+  }
+
+  /**
+* Reads table statistics from the descriptors properties.
+*/
+  protected def getTableStats: Option[TableStats] = {
+val normalizedProps = new DescriptorProperties()
+addProperties(normalizedProps)
+val rowCount = 
toScala(normalizedProps.getOptionalLong(STATISTICS_ROW_COUNT))
+rowCount match {
+  case Some(cnt) =>
+val columnStats = readColumnStats(normalizedProps, 
STATISTICS_COLUMNS)
+Some(TableStats(cnt, columnStats.asJava))
+  case None =>
+None
+}
+  }
+}
+
+object TableDescriptor {
+  /**
+* Key for describing the type of this table, valid values are 
('source').
+*/
+  val TABLE_TYPE = "type"
+
+  /**
+* Valid TABLE_TYPE value.
+*/
+  val TABLE_TYPE_SOURCE = "source"
--- End diff --

make sense  


> Add support for unified table source and sink declaration in environment file
> -
>
> Key: FLINK-9059
> URL: https://issues.apache.org/jira/browse/FLINK-9059
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
> Fix For: 1.5.0
>
>
> 1) Add a common property called "type" with single value 'source'.
> 2) in yaml file, replace "sources" with "tables".
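
A sketch of what a table entry in the environment file might look like after this 
change; only the {{tables}} key and the {{type}} attribute come from this issue, 
the remaining keys are illustrative.

{code:yaml}
# previously the top-level key was "sources"; it becomes "tables",
# and each table declares its role via the new "type" attribute
tables:
  - name: MyTable
    type: source   # currently the only valid value; sink support is tracked in FLINK-8866
    connector:
      type: kafka
    format:
      type: json
{code}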



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9059) Add support for unified table source and sink declaration in environment file

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420749#comment-16420749
 ] 

ASF GitHub Bot commented on FLINK-9059:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178334028
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -29,38 +30,47 @@
 
 /**
  * Environment configuration that represents the content of an environment 
file. Environment files
- * define sources, execution, and deployment behavior. An environment 
might be defined by default or
+ * define tables, execution, and deployment behavior. An environment might 
be defined by default or
  * as part of a session. Environments can be merged or enriched with 
properties (e.g. from CLI command).
  *
  * In future versions, we might restrict the merging or enrichment of 
deployment properties to not
  * allow overwriting of a deployment by a session.
  */
 public class Environment {
 
-   private Map sources;
+   private Map tables;
 
private Execution execution;
 
private Deployment deployment;
 
public Environment() {
-   this.sources = Collections.emptyMap();
+   this.tables = Collections.emptyMap();
this.execution = new Execution();
this.deployment = new Deployment();
}
 
-   public Map getSources() {
-   return sources;
+   public Map getTables() {
+   return tables;
}
 
-   public void setSources(List> sources) {
-   this.sources = new HashMap<>(sources.size());
-   sources.forEach(config -> {
-   final Source s = Source.create(config);
-   if (this.sources.containsKey(s.getName())) {
-   throw new SqlClientException("Duplicate source 
name '" + s + "'.");
+   public void setTables(List> tables) {
+   this.tables = new HashMap<>(tables.size());
+   tables.forEach(config -> {
+   if (!config.containsKey(TableDescriptor.TABLE_TYPE())) {
+   throw new SqlClientException("The 'type' 
attribute of a table is missing.");
--- End diff --

Got it, so `both` should probably be added here once `sink` type is 
supported in FLINK-8866. Thanks for the clarification.


> Add support for unified table source and sink declaration in environment file
> -
>
> Key: FLINK-9059
> URL: https://issues.apache.org/jira/browse/FLINK-9059
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
> Fix For: 1.5.0
>
>
> 1) Add a common property called "type" with single value 'source'.
> 2) in yaml file, replace "sources" with "tables".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5758: [FLINK-9059][Table API & SQL] add table type attri...

2018-03-30 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178334028
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -29,38 +30,47 @@
 
 /**
  * Environment configuration that represents the content of an environment 
file. Environment files
- * define sources, execution, and deployment behavior. An environment 
might be defined by default or
+ * define tables, execution, and deployment behavior. An environment might 
be defined by default or
  * as part of a session. Environments can be merged or enriched with 
properties (e.g. from CLI command).
  *
  * In future versions, we might restrict the merging or enrichment of 
deployment properties to not
  * allow overwriting of a deployment by a session.
  */
 public class Environment {
 
-   private Map sources;
+   private Map tables;
 
private Execution execution;
 
private Deployment deployment;
 
public Environment() {
-   this.sources = Collections.emptyMap();
+   this.tables = Collections.emptyMap();
this.execution = new Execution();
this.deployment = new Deployment();
}
 
-   public Map getSources() {
-   return sources;
+   public Map getTables() {
+   return tables;
}
 
-   public void setSources(List> sources) {
-   this.sources = new HashMap<>(sources.size());
-   sources.forEach(config -> {
-   final Source s = Source.create(config);
-   if (this.sources.containsKey(s.getName())) {
-   throw new SqlClientException("Duplicate source 
name '" + s + "'.");
+   public void setTables(List> tables) {
+   this.tables = new HashMap<>(tables.size());
+   tables.forEach(config -> {
+   if (!config.containsKey(TableDescriptor.TABLE_TYPE())) {
+   throw new SqlClientException("The 'type' 
attribute of a table is missing.");
--- End diff --

Got it, so `both` should probably be added here once `sink` type is 
supported in FLINK-8866. Thanks for the clarification.


---


[GitHub] flink pull request #5758: [FLINK-9059][Table API & SQL] add table type attri...

2018-03-30 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178334057
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/descriptors/TableDescriptor.scala
 ---
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors
+
+import org.apache.flink.table.descriptors.DescriptorProperties.toScala
+import 
org.apache.flink.table.descriptors.StatisticsValidator.{STATISTICS_COLUMNS, 
STATISTICS_ROW_COUNT, readColumnStats}
+import org.apache.flink.table.plan.stats.TableStats
+
+import scala.collection.JavaConverters._
+
+/**
+  * Common class for all descriptors describing table sources and sinks.
+  */
+abstract class TableDescriptor extends Descriptor {
+
+  protected var connectorDescriptor: Option[ConnectorDescriptor] = None
+  protected var formatDescriptor: Option[FormatDescriptor] = None
+  protected var schemaDescriptor: Option[Schema] = None
+  protected var statisticsDescriptor: Option[Statistics] = None
+  protected var metaDescriptor: Option[Metadata] = None
+
+  /**
+* Internal method for properties conversion.
+*/
+  override private[flink] def addProperties(properties: 
DescriptorProperties): Unit = {
+connectorDescriptor.foreach(_.addProperties(properties))
+formatDescriptor.foreach(_.addProperties(properties))
+schemaDescriptor.foreach(_.addProperties(properties))
+metaDescriptor.foreach(_.addProperties(properties))
+  }
+
+  /**
+* Reads table statistics from the descriptors properties.
+*/
+  protected def getTableStats: Option[TableStats] = {
+val normalizedProps = new DescriptorProperties()
+addProperties(normalizedProps)
+val rowCount = 
toScala(normalizedProps.getOptionalLong(STATISTICS_ROW_COUNT))
+rowCount match {
+  case Some(cnt) =>
+val columnStats = readColumnStats(normalizedProps, 
STATISTICS_COLUMNS)
+Some(TableStats(cnt, columnStats.asJava))
+  case None =>
+None
+}
+  }
+}
+
+object TableDescriptor {
+  /**
+* Key for describing the type of this table, valid values are 
('source').
+*/
+  val TABLE_TYPE = "type"
+
+  /**
+* Valid TABLE_TYPE value.
+*/
+  val TABLE_TYPE_SOURCE = "source"
--- End diff --

make sense 👍 


---


[jira] [Commented] (FLINK-6968) Store streaming, updating tables with unique key in queryable state

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420721#comment-16420721
 ] 

ASF GitHub Bot commented on FLINK-6968:
---

Github user suez1224 commented on the issue:

https://github.com/apache/flink/pull/5688
  
Could you please rebase your PR to resolve the conflicts? Thanks.


> Store streaming, updating tables with unique key in queryable state
> ---
>
> Key: FLINK-6968
> URL: https://issues.apache.org/jira/browse/FLINK-6968
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Renjie Liu
>Priority: Major
>
> Streaming tables with unique key are continuously updated. For example 
> queries with a non-windowed aggregation generate such tables. Commonly, such 
> updating tables are emitted via an upsert table sink to an external datastore 
> (k-v store, database) to make them accessible to applications.
> This issue is about adding a feature to store and maintain such a table as 
> queryable state in Flink. By storing the table in Flink's queryable state, we 
> do not need an external data store to access the results of the query but can 
> query the results directly from Flink.
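
For context, a rough sketch of how such a table could then be queried from 
outside the job with Flink's queryable state client; the proxy address, state 
name, and types are assumptions for the example.

{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.queryablestate.client.QueryableStateClient;

public class QueryResultTableSketch {

    public static void main(String[] args) throws Exception {
        // host/port of a TaskManager's queryable state proxy; values are assumptions
        QueryableStateClient client = new QueryableStateClient("localhost", 9069);

        JobID jobId = JobID.fromHexString(args[0]);

        // the queryable state name and value type registered by the sink; both assumed here
        ValueStateDescriptor<Long> descriptor =
                new ValueStateDescriptor<>("count-per-key", BasicTypeInfo.LONG_TYPE_INFO);

        CompletableFuture<ValueState<Long>> future = client.getKvState(
                jobId, "count-per-key", "some-key", BasicTypeInfo.STRING_TYPE_INFO, descriptor);

        System.out.println("current value: " + future.get().value());
    }
}
{code}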



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5688: [FLINK-6968][Table API & SQL] Add Queryable table sink.

2018-03-30 Thread suez1224
Github user suez1224 commented on the issue:

https://github.com/apache/flink/pull/5688
  
Could you please rebase your PR to resolve the conflicts? Thanks.


---


[jira] [Commented] (FLINK-9059) Add support for unified table source and sink declaration in environment file

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420715#comment-16420715
 ] 

ASF GitHub Bot commented on FLINK-9059:
---

Github user suez1224 commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178330131
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/descriptors/TableDescriptor.scala
 ---
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors
+
+import org.apache.flink.table.descriptors.DescriptorProperties.toScala
+import 
org.apache.flink.table.descriptors.StatisticsValidator.{STATISTICS_COLUMNS, 
STATISTICS_ROW_COUNT, readColumnStats}
+import org.apache.flink.table.plan.stats.TableStats
+
+import scala.collection.JavaConverters._
+
+/**
+  * Common class for all descriptors describing table sources and sinks.
+  */
+abstract class TableDescriptor extends Descriptor {
+
+  protected var connectorDescriptor: Option[ConnectorDescriptor] = None
+  protected var formatDescriptor: Option[FormatDescriptor] = None
+  protected var schemaDescriptor: Option[Schema] = None
+  protected var statisticsDescriptor: Option[Statistics] = None
+  protected var metaDescriptor: Option[Metadata] = None
+
+  /**
+* Internal method for properties conversion.
+*/
+  override private[flink] def addProperties(properties: 
DescriptorProperties): Unit = {
+connectorDescriptor.foreach(_.addProperties(properties))
+formatDescriptor.foreach(_.addProperties(properties))
+schemaDescriptor.foreach(_.addProperties(properties))
+metaDescriptor.foreach(_.addProperties(properties))
+  }
+
+  /**
+* Reads table statistics from the descriptors properties.
+*/
+  protected def getTableStats: Option[TableStats] = {
+val normalizedProps = new DescriptorProperties()
+addProperties(normalizedProps)
+val rowCount = 
toScala(normalizedProps.getOptionalLong(STATISTICS_ROW_COUNT))
+rowCount match {
+  case Some(cnt) =>
+val columnStats = readColumnStats(normalizedProps, 
STATISTICS_COLUMNS)
+Some(TableStats(cnt, columnStats.asJava))
+  case None =>
+None
+}
+  }
+}
+
+object TableDescriptor {
+  /**
+* Key for describing the type of this table, valid values are 
('source').
+*/
+  val TABLE_TYPE = "type"
+
+  /**
+* Valid TABLE_TYPE value.
+*/
+  val TABLE_TYPE_SOURCE = "source"
--- End diff --

The convention in the code currently uses just constants, please see 
KafkaValidator or RowtimeValidator. Also, it seems to be premature optimization 
to me given such simple use of the constant.


> Add support for unified table source and sink declaration in environment file
> -
>
> Key: FLINK-9059
> URL: https://issues.apache.org/jira/browse/FLINK-9059
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
> Fix For: 1.5.0
>
>
> 1) Add a common property called "type" with single value 'source'.
> 2) in yaml file, replace "sources" with "tables".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5758: [FLINK-9059][Table API & SQL] add table type attri...

2018-03-30 Thread suez1224
Github user suez1224 commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178330131
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/descriptors/TableDescriptor.scala
 ---
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors
+
+import org.apache.flink.table.descriptors.DescriptorProperties.toScala
+import 
org.apache.flink.table.descriptors.StatisticsValidator.{STATISTICS_COLUMNS, 
STATISTICS_ROW_COUNT, readColumnStats}
+import org.apache.flink.table.plan.stats.TableStats
+
+import scala.collection.JavaConverters._
+
+/**
+  * Common class for all descriptors describing table sources and sinks.
+  */
+abstract class TableDescriptor extends Descriptor {
+
+  protected var connectorDescriptor: Option[ConnectorDescriptor] = None
+  protected var formatDescriptor: Option[FormatDescriptor] = None
+  protected var schemaDescriptor: Option[Schema] = None
+  protected var statisticsDescriptor: Option[Statistics] = None
+  protected var metaDescriptor: Option[Metadata] = None
+
+  /**
+* Internal method for properties conversion.
+*/
+  override private[flink] def addProperties(properties: 
DescriptorProperties): Unit = {
+connectorDescriptor.foreach(_.addProperties(properties))
+formatDescriptor.foreach(_.addProperties(properties))
+schemaDescriptor.foreach(_.addProperties(properties))
+metaDescriptor.foreach(_.addProperties(properties))
+  }
+
+  /**
+* Reads table statistics from the descriptors properties.
+*/
+  protected def getTableStats: Option[TableStats] = {
+val normalizedProps = new DescriptorProperties()
+addProperties(normalizedProps)
+val rowCount = 
toScala(normalizedProps.getOptionalLong(STATISTICS_ROW_COUNT))
+rowCount match {
+  case Some(cnt) =>
+val columnStats = readColumnStats(normalizedProps, 
STATISTICS_COLUMNS)
+Some(TableStats(cnt, columnStats.asJava))
+  case None =>
+None
+}
+  }
+}
+
+object TableDescriptor {
+  /**
+* Key for describing the type of this table, valid values are 
('source').
+*/
+  val TABLE_TYPE = "type"
+
+  /**
+* Valid TABLE_TYPE value.
+*/
+  val TABLE_TYPE_SOURCE = "source"
--- End diff --

The convention in the code currently uses just constants, please see 
KafkaValidator or RowtimeValidator. Also, it seems to be premature optimization 
to me given such simple use of the constant.


---


[jira] [Commented] (FLINK-9069) Fix some double semicolons to single semicolons, and update checkstyle

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420705#comment-16420705
 ] 

ASF GitHub Bot commented on FLINK-9069:
---

Github user jparkie commented on the issue:

https://github.com/apache/flink/pull/5769
  
@zentol Bump. Just wondering if this got lost. :)


> Fix some double semicolons to single semicolons, and update checkstyle
> --
>
> Key: FLINK-9069
> URL: https://issues.apache.org/jira/browse/FLINK-9069
> Project: Flink
>  Issue Type: Task
>  Components: Checkstyle
>Reporter: Jacob Park
>Assignee: Jacob Park
>Priority: Trivial
>  Labels: starter
>
> As I was reading through the source code, I noticed that there were some 
> double semicolons ";;".
> These should be fixed.
> Finally, the tools/maven/checkstyle.xml should be updated accordingly.
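
One possible way to catch this in checkstyle, as a sketch only (the rule actually 
added by the PR may differ):

{code:xml}
<!-- placed under the TreeWalker module; flags ";;" on a single line.
     Note: a bare "for (;;)" loop would also match and would need an exclusion. -->
<module name="RegexpSinglelineJava">
  <property name="format" value=";;"/>
  <property name="message" value="Multiple consecutive semicolons are not allowed."/>
  <property name="ignoreComments" value="true"/>
</module>
{code}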



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5769: [FLINK-9069] Add checkstyle rule to detect multiple conse...

2018-03-30 Thread jparkie
Github user jparkie commented on the issue:

https://github.com/apache/flink/pull/5769
  
@zentol Bump. Just wondering if this got lost. :)


---


[jira] [Commented] (FLINK-1707) Add an Affinity Propagation Library Method

2018-03-30 Thread Josep Rubió (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420649#comment-16420649
 ] 

Josep Rubió commented on FLINK-1707:


Is someone still interested in adding AP to Flink?

Thanks!

> Add an Affinity Propagation Library Method
> --
>
> Key: FLINK-1707
> URL: https://issues.apache.org/jira/browse/FLINK-1707
> Project: Flink
>  Issue Type: New Feature
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Josep Rubió
>Priority: Minor
>  Labels: algorithm, requires-design-doc
> Attachments: Binary_Affinity_Propagation_in_Flink_design_doc.pdf
>
>
> This issue proposes adding the an implementation of the Affinity Propagation 
> algorithm as a Gelly library method and a corresponding example.
> The algorithm is described in paper [1] and a description of a vertex-centric 
> implementation can be found is [2].
> [1]: http://www.psi.toronto.edu/affinitypropagation/FreyDueckScience07.pdf
> [2]: http://event.cwi.nl/grades2014/00-ching-slides.pdf
> Design doc:
> https://docs.google.com/document/d/1QULalzPqMVICi8jRVs3S0n39pell2ZVc7RNemz_SGA4/edit?usp=sharing
> Example spreadsheet:
> https://docs.google.com/spreadsheets/d/1CurZCBP6dPb1IYQQIgUHVjQdyLxK0JDGZwlSXCzBcvA/edit?usp=sharing
> Graph:
> https://docs.google.com/drawings/d/1PC3S-6AEt2Gp_TGrSfiWzkTcL7vXhHSxvM6b9HglmtA/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9108) invalid ProcessWindowFunction link in Document

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420628#comment-16420628
 ] 

ASF GitHub Bot commented on FLINK-9108:
---

Github user alpinegizmo commented on the issue:

https://github.com/apache/flink/pull/5785
  
+1


> invalid ProcessWindowFunction link in Document 
> ---
>
> Key: FLINK-9108
> URL: https://issues.apache.org/jira/browse/FLINK-9108
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Matrix42
>Assignee: Matrix42
>Priority: Trivial
> Attachments: QQ截图20180329184203.png
>
>
> !QQ截图20180329184203.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5785: [FLINK-9108][docs] Fix invalid link

2018-03-30 Thread alpinegizmo
Github user alpinegizmo commented on the issue:

https://github.com/apache/flink/pull/5785
  
+1


---


[jira] [Assigned] (FLINK-9087) Return value of broadcastEvent should be closed in StreamTask#performCheckpoint

2018-03-30 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-9087:
---

Assignee: mingleizhang

> Return value of broadcastEvent should be closed in 
> StreamTask#performCheckpoint
> ---
>
> Key: FLINK-9087
> URL: https://issues.apache.org/jira/browse/FLINK-9087
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
>
> {code}
> for (StreamRecordWriter> 
> streamRecordWriter : streamRecordWriters) {
>   try {
> streamRecordWriter.broadcastEvent(message);
> {code}
> The BufferConsumer returned by broadcastEvent() should be closed.
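
A sketch of the fix direction, mirroring the quoted fragment with generics 
simplified: the description states that broadcastEvent() returns a BufferConsumer, 
which implements Closeable, so try-with-resources can close it.

{code:java}
// close the BufferConsumer returned by broadcastEvent() so its backing buffer is released
for (StreamRecordWriter<?> streamRecordWriter : streamRecordWriters) {
    try (BufferConsumer ignored = streamRecordWriter.broadcastEvent(message)) {
        // the event has been broadcast; closing the consumer is all that remains
    }
}
{code}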



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9112) Let end-to-end tests upload the components logs in case of a failure

2018-03-30 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420582#comment-16420582
 ] 

mingleizhang commented on FLINK-9112:
-

Hi [~till.rohrmann], I would like to ask: where should the component logs be uploaded?

> Let end-to-end tests upload the components logs in case of a failure
> 
>
> Key: FLINK-9112
> URL: https://issues.apache.org/jira/browse/FLINK-9112
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Major
>
> I think it would be quite helpful if the end-to-end tests which run as part of 
> a Flink build uploaded the component logs in case of a failure, similar to how 
> we do it for the Maven tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-7642) Upgrade maven surefire plugin to 2.21.0

2018-03-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7642:
--
Description: 
The Surefire 2.19 release introduced more useful test filters which let us 
run a subset of the tests.

This issue is for upgrading maven surefire plugin to 2.21.0 which contains 
SUREFIRE-1422

  was:
Surefire 2.19 release introduced more useful test filters which would let us 
run a subset of the test.


This issue is for upgrading maven surefire plugin to 2.19.1


> Upgrade maven surefire plugin to 2.21.0
> ---
>
> Key: FLINK-7642
> URL: https://issues.apache.org/jira/browse/FLINK-7642
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Priority: Major
>
> The Surefire 2.19 release introduced more useful test filters which let us 
> run a subset of the tests.
> This issue is for upgrading maven surefire plugin to 2.21.0 which contains 
> SUREFIRE-1422



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-7642) Upgrade maven surefire plugin to 2.21.0

2018-03-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7642:
--
Summary: Upgrade maven surefire plugin to 2.21.0  (was: Upgrade maven 
surefire plugin to 2.19.1)

> Upgrade maven surefire plugin to 2.21.0
> ---
>
> Key: FLINK-7642
> URL: https://issues.apache.org/jira/browse/FLINK-7642
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Priority: Major
>
> Surefire 2.19 release introduced more useful test filters which would let us 
> run a subset of the test.
> This issue is for upgrading maven surefire plugin to 2.19.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5795: [hotfix] [docs] fix mistake about Sample code in C...

2018-03-30 Thread mayyamus
GitHub user mayyamus opened a pull request:

https://github.com/apache/flink/pull/5795

[hotfix] [docs] fix  mistake about Sample code in Concepts & Common API


## Brief change log
Fix a mistake in the sample code in the Concepts & Common API documentation.

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:
none
## Documentation
none


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mayyamus/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5795.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5795


commit 532807a4612fd204aa2a365e48b9b9dbf5fee3f9
Author: mayyamus 
Date:   2018-03-30T13:44:47Z

a little mistake in the document




---


[GitHub] flink issue #5761: [FLINK-8989] [e2eTests] Elasticsearch1&2&5 end to end tes...

2018-03-30 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5761
  
I think I found how to solve this issue. I will fix it as soon as possible.


---


[jira] [Commented] (FLINK-8989) End-to-end test: ElasticSearch connector

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420461#comment-16420461
 ] 

ASF GitHub Bot commented on FLINK-8989:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5761
  
I think I found how to solve this issue. I will fix it as soon as possible.



> End-to-end test: ElasticSearch connector
> 
>
> Key: FLINK-8989
> URL: https://issues.apache.org/jira/browse/FLINK-8989
> Project: Flink
>  Issue Type: Sub-task
>  Components: ElasticSearch Connector, Tests
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Major
>
> Similar to FLINK-8988 we should add an end-to-end test which tests the 
> {{ElasticSearch}} connector. We should run against all three supported 
> ElasticSearch versions: 1.x, 2.x and 5.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8989) End-to-end test: ElasticSearch connector

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420460#comment-16420460
 ] 

ASF GitHub Bot commented on FLINK-8989:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5761
  
I think I found how to solve this issue. I will fix it as soon as possible.


> End-to-end test: ElasticSearch connector
> 
>
> Key: FLINK-8989
> URL: https://issues.apache.org/jira/browse/FLINK-8989
> Project: Flink
>  Issue Type: Sub-task
>  Components: ElasticSearch Connector, Tests
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Major
>
> Similar to FLINK-8988 we should add an end-to-end test which tests the 
> {{ElasticSearch}} connector. We should run against all three supported 
> ElasticSearch versions: 1.x, 2.x and 5.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8989) End-to-end test: ElasticSearch connector

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420463#comment-16420463
 ] 

ASF GitHub Bot commented on FLINK-8989:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5761
  
I think I found how to solve this issue. I will fix it as soon as possible.


> End-to-end test: ElasticSearch connector
> 
>
> Key: FLINK-8989
> URL: https://issues.apache.org/jira/browse/FLINK-8989
> Project: Flink
>  Issue Type: Sub-task
>  Components: ElasticSearch Connector, Tests
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Major
>
> Similar to FLINK-8988 we should add an end-to-end test which tests the 
> {{ElasticSearch}} connector. We should run against all three supported 
> ElasticSearch versions: 1.x, 2.x and 5.x.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #5761: [FLINK-8989] [e2eTests] Elasticsearch1&2&5 end to end tes...

2018-03-30 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5761
  
I think I found how to solve this issue. I will fix it as soon as possible.


---


[GitHub] flink issue #5761: [FLINK-8989] [e2eTests] Elasticsearch1&2&5 end to end tes...

2018-03-30 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5761
  
I think I found how to solve this issue. I will fix it as soon as possible.



---


[jira] [Commented] (FLINK-8989) End-to-end test: ElasticSearch connector

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420326#comment-16420326
 ] 

ASF GitHub Bot commented on FLINK-8989:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5761
  
I found an issue; I tried to track it down but without success.

For the Elasticsearch 5 sink, I can run the job directly with the following 
command, with the Flink cluster and the Elasticsearch 5 cluster set up 
manually, and there is no error:

```
$FLINK_DIR/bin/flink run -p 1 
projects/flink/flink-end-to-end-tests/flink-elasticsearch5-test/target/Elasticsearch5SinkExample.jar
 --index index --type type
```

But when I switch to the bash scripts to set up the Flink cluster and the ES 5 
cluster, the data is still written to Elasticsearch 5 without error (we can see 
**Elasticsearch end to end test pass** in the message below), yet in the end 
the script throws an exception:
```

/Users/zhangminglei/projects/flink/flink-end-to-end-tests/test-scripts/../flink-elasticsearch5-test/target/Elasticsearch5SinkExample.jar
Starting execution of program
Program execution finished
Job with JobID 2774e793a167a6bfba07031948fb5b14 has finished.
Job Runtime: 6564 ms
  % Total% Received % Xferd  Average Speed   TimeTime Time  
Current
 Dload  Upload   Total   SpentLeft  
Speed
100  4189  100  41890 0  19737  0 --:--:-- --:--:-- --:--:-- 
19759
Elasticsearch end to end test pass.
Stopping taskexecutor daemon (pid: 86784) on host 
zhangmingleideMacBook-Pro.local.
Stopping standalonesession daemon (pid: 86475) on host 
zhangmingleideMacBook-Pro.local.
Found non-empty .out files:
Exception in thread "Thread-6" java.lang.NoClassDefFoundError: 
org/apache/flink/streaming/connectors/elasticsearch5/shaded/org/jboss/netty/channel/socket/nio/NioWorker$RegisterTask
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioWorker.createRegisterTask(NioWorker.java:118)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.register(AbstractNioSelector.java:104)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioWorker.register(NioWorker.java:36)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:157)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioWorker$RegisterTask
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at 
org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 13 more
One or more tests FAILED.
```


> End-to-end test: ElasticSearch connector
> 
>
> Key: FLINK-8989
> URL: https://issues.apache.org/jira/browse/FLINK-8989
> Project: Flink
>  Issue Type: Sub-task
>  Components: ElasticSearch Connector, Tests
>Reporter: Till Rohrmann
>Assignee: mingleizhang
>Priority: Major
>
> Similar to FLINK-8988 we should add an end-to-end test which tests the 
> {{ElasticSearch}} connector. We should run against all three supported 
> ElasticSearch versions: 1.x, 2.x and 5.x.

[GitHub] flink issue #5761: [FLINK-8989] [e2eTests] Elasticsearch1&2&5 end to end tes...

2018-03-30 Thread zhangminglei
Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5761
  
I found an issue; I tried to track it down but without success.

For the Elasticsearch 5 sink, I can run the job directly with the following 
command, with the Flink cluster and the Elasticsearch 5 cluster set up 
manually, and there is no error:

```
$FLINK_DIR/bin/flink run -p 1 
projects/flink/flink-end-to-end-tests/flink-elasticsearch5-test/target/Elasticsearch5SinkExample.jar
 --index index --type type
```

But when I switch to the bash scripts to set up the Flink cluster and the ES 5 
cluster, the data is still written to Elasticsearch 5 without error (we can see 
**Elasticsearch end to end test pass** in the message below), yet in the end 
the script throws an exception:
```

/Users/zhangminglei/projects/flink/flink-end-to-end-tests/test-scripts/../flink-elasticsearch5-test/target/Elasticsearch5SinkExample.jar
Starting execution of program
Program execution finished
Job with JobID 2774e793a167a6bfba07031948fb5b14 has finished.
Job Runtime: 6564 ms
  % Total% Received % Xferd  Average Speed   TimeTime Time  
Current
 Dload  Upload   Total   SpentLeft  
Speed
100  4189  100  41890 0  19737  0 --:--:-- --:--:-- --:--:-- 
19759
Elasticsearch end to end test pass.
Stopping taskexecutor daemon (pid: 86784) on host 
zhangmingleideMacBook-Pro.local.
Stopping standalonesession daemon (pid: 86475) on host 
zhangmingleideMacBook-Pro.local.
Found non-empty .out files:
Exception in thread "Thread-6" java.lang.NoClassDefFoundError: 
org/apache/flink/streaming/connectors/elasticsearch5/shaded/org/jboss/netty/channel/socket/nio/NioWorker$RegisterTask
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioWorker.createRegisterTask(NioWorker.java:118)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.register(AbstractNioSelector.java:104)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioWorker.register(NioWorker.java:36)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:157)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.jboss.netty.channel.socket.nio.NioWorker$RegisterTask
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at 
org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$ChildFirstClassLoader.loadClass(FlinkUserCodeClassLoaders.java:120)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 13 more
One or more tests FAILED.
```


---


[jira] [Created] (FLINK-9117) Disable artifact download in mesos container environment.

2018-03-30 Thread Renjie Liu (JIRA)
Renjie Liu created FLINK-9117:
-

 Summary: Disable artifact download in mesos container environment.
 Key: FLINK-9117
 URL: https://issues.apache.org/jira/browse/FLINK-9117
 Project: Flink
  Issue Type: Bug
  Components: Cluster Management
Affects Versions: 1.5.0
Reporter: Renjie Liu
Assignee: Renjie Liu
 Fix For: 1.6.0


In the current implementation, the Mesos fetcher needs to download all 
artifacts from the artifact server. However, this is not necessary when 
mesos.resourcemanager.tasks.container.type is set to docker, since the 
artifacts can be included in the Docker image. This issue will therefore add a 
configuration option so that the artifact download can be disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9116) Introduce getAll and removeAll for MapState

2018-03-30 Thread Sihua Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihua Zhou reassigned FLINK-9116:
-

Assignee: Sihua Zhou

> Introduce getAll and removeAll for MapState
> ---
>
> Key: FLINK-9116
> URL: https://issues.apache.org/jira/browse/FLINK-9116
> Project: Flink
>  Issue Type: New Feature
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Sihua Zhou
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.6.0
>
>
> We already support {{putAll(List)}} in {{MapState}}. I think we should also 
> support {{getAll(Iterable)}} and {{removeAll(Iterable)}} in {{MapState}}; they 
> can be convenient in some scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8880) Validate configurations for SQL Client

2018-03-30 Thread Xingcan Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420256#comment-16420256
 ] 

Xingcan Cui commented on FLINK-8880:


Hi [~walterddr], sorry for the late reply. I've been quite busy these days. 
Feel free to take it.

> Validate configurations for SQL Client
> --
>
> Key: FLINK-8880
> URL: https://issues.apache.org/jira/browse/FLINK-8880
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Xingcan Cui
>Priority: Major
>
> Currently, the configuration items for the SQL Client are stored in maps and 
> accessed with default values. They should be validated when the client is 
> created. Also, log warnings should be emitted when default values are used.
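
As an illustration only (the helper and method names below are invented, not the SQL Client's actual API), validation with a warning on defaults could look roughly like this:

```
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical validation helper for illustrating the proposal.
public final class ConfigValidator {

    private static final Logger LOG = LoggerFactory.getLogger(ConfigValidator.class);

    // Reads an integer property, warns when the default kicks in, and rejects invalid values.
    public static int getPositiveInt(Map<String, String> properties, String key, int defaultValue) {
        final String raw = properties.get(key);
        if (raw == null) {
            LOG.warn("Property '{}' is not set, falling back to default value {}.", key, defaultValue);
            return defaultValue;
        }
        final int value;
        try {
            value = Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Property '" + key + "' must be an integer but was: " + raw, e);
        }
        if (value <= 0) {
            throw new IllegalArgumentException("Property '" + key + "' must be positive but was: " + value);
        }
        return value;
    }
}
```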



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9116) Introduce getAll and removeAll for MapState

2018-03-30 Thread Sihua Zhou (JIRA)
Sihua Zhou created FLINK-9116:
-

 Summary: Introduce getAll and removeAll for MapState
 Key: FLINK-9116
 URL: https://issues.apache.org/jira/browse/FLINK-9116
 Project: Flink
  Issue Type: New Feature
  Components: State Backends, Checkpointing
Affects Versions: 1.5.0
Reporter: Sihua Zhou
 Fix For: 1.6.0


We already support {{putAll(List)}} in {{MapState}}. I think we should also 
support {{getAll(Iterable)}} and {{removeAll(Iterable)}} in {{MapState}}, as 
they can be convenient in some scenarios.
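
A rough sketch of the proposed semantics, written as free-standing helpers on top of the current {{MapState}} interface (illustrative only; the real methods would live on {{MapState}} itself, and backends such as RocksDB could implement them more efficiently than a key-by-key loop):

```
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.state.MapState;

// Hypothetical helpers sketching getAll/removeAll on top of the existing API.
public final class MapStateBatchOps {

    // Look up each key individually; missing keys are simply skipped.
    public static <UK, UV> Map<UK, UV> getAll(MapState<UK, UV> state, Iterable<UK> keys) throws Exception {
        final Map<UK, UV> result = new HashMap<>();
        for (UK key : keys) {
            final UV value = state.get(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    // Remove each key individually.
    public static <UK, UV> void removeAll(MapState<UK, UV> state, Iterable<UK> keys) throws Exception {
        for (UK key : keys) {
            state.remove(key);
        }
    }
}
```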



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9059) Add support for unified table source and sink declaration in environment file

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420249#comment-16420249
 ] 

ASF GitHub Bot commented on FLINK-9059:
---

Github user suez1224 commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178244750
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -102,10 +112,10 @@ public static Environment parse(String content) throws IOException {
public static Environment merge(Environment env1, Environment env2) {
final Environment mergedEnv = new Environment();
 
-   // merge sources
-   final Map<String, Source> sources = new HashMap<>(env1.getSources());
-   mergedEnv.getSources().putAll(env2.getSources());
-   mergedEnv.sources = sources;
+   // merge tables
+   final Map<String, TableDescriptor> sources = new HashMap<>(env1.getTables());
--- End diff --

Good catch on the naming.
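
For clarity, a sketch of how the merged method might read once the local variable is renamed to match the new field (the value type TableDescriptor is assumed from context; this is not the committed code):

```
// Sketch only: rename the local variable from "sources" to "tables".
public static Environment merge(Environment env1, Environment env2) {
    final Environment mergedEnv = new Environment();

    // merge tables: entries from env2 override entries from env1
    final Map<String, TableDescriptor> tables = new HashMap<>(env1.getTables());
    tables.putAll(env2.getTables());
    mergedEnv.tables = tables;

    // merging of execution and deployment sections omitted in this sketch
    return mergedEnv;
}
```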


> Add support for unified table source and sink declaration in environment file
> -
>
> Key: FLINK-9059
> URL: https://issues.apache.org/jira/browse/FLINK-9059
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
> Fix For: 1.5.0
>
>
> 1) Add a common property called "type" with the single value 'source'.
> 2) In the YAML file, replace "sources" with "tables".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5758: [FLINK-9059][Table API & SQL] add table type attri...

2018-03-30 Thread suez1224
Github user suez1224 commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178244750
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -102,10 +112,10 @@ public static Environment parse(String content) throws IOException {
public static Environment merge(Environment env1, Environment env2) {
final Environment mergedEnv = new Environment();
 
-   // merge sources
-   final Map<String, Source> sources = new HashMap<>(env1.getSources());
-   mergedEnv.getSources().putAll(env2.getSources());
-   mergedEnv.sources = sources;
+   // merge tables
+   final Map<String, TableDescriptor> sources = new HashMap<>(env1.getTables());
--- End diff --

Good catch on the naming.


---


[jira] [Commented] (FLINK-9059) Add support for unified table source and sink declaration in environment file

2018-03-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420247#comment-16420247
 ] 

ASF GitHub Bot commented on FLINK-9059:
---

Github user suez1224 commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178244473
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -29,38 +30,47 @@
 
 /**
  * Environment configuration that represents the content of an environment file. Environment files
- * define sources, execution, and deployment behavior. An environment might be defined by default or
+ * define tables, execution, and deployment behavior. An environment might be defined by default or
  * as part of a session. Environments can be merged or enriched with properties (e.g. from CLI command).
  *
  * In future versions, we might restrict the merging or enrichment of deployment properties to not
  * allow overwriting of a deployment by a session.
  */
 public class Environment {
 
-   private Map<String, Source> sources;
+   private Map<String, TableDescriptor> tables;
 
private Execution execution;
 
private Deployment deployment;
 
public Environment() {
-   this.sources = Collections.emptyMap();
+   this.tables = Collections.emptyMap();
this.execution = new Execution();
this.deployment = new Deployment();
}
 
-   public Map<String, Source> getSources() {
-   return sources;
+   public Map<String, TableDescriptor> getTables() {
+   return tables;
}
 
-   public void setSources(List<Map<String, Object>> sources) {
-   this.sources = new HashMap<>(sources.size());
-   sources.forEach(config -> {
-   final Source s = Source.create(config);
-   if (this.sources.containsKey(s.getName())) {
-   throw new SqlClientException("Duplicate source 
name '" + s + "'.");
+   public void setTables(List<Map<String, Object>> tables) {
+   this.tables = new HashMap<>(tables.size());
+   tables.forEach(config -> {
+   if (!config.containsKey(TableDescriptor.TABLE_TYPE())) {
+   throw new SqlClientException("The 'type' 
attribute of a table is missing.");
--- End diff --

Yes, the values can be (source, sink and both), please see 
https://issues.apache.org/jira/browse/FLINK-8866.
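
As a purely hypothetical illustration of that validation (the enum and parser below are invented; the actual descriptors are defined by FLINK-8866 and this PR), the 'type' attribute could be mapped to an enum:

```
// Hypothetical enum for the three supported table types.
public enum TableType {
    SOURCE, SINK, BOTH;

    // Parse the 'type' attribute from the environment file, rejecting unknown values.
    public static TableType parse(String value) {
        switch (value.toLowerCase()) {
            case "source":
                return SOURCE;
            case "sink":
                return SINK;
            case "both":
                return BOTH;
            default:
                throw new IllegalArgumentException(
                        "Unsupported table type '" + value + "'; expected 'source', 'sink' or 'both'.");
        }
    }
}
```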


> Add support for unified table source and sink declaration in environment file
> -
>
> Key: FLINK-9059
> URL: https://issues.apache.org/jira/browse/FLINK-9059
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
> Fix For: 1.5.0
>
>
> 1) Add a common property called "type" with the single value 'source'.
> 2) In the YAML file, replace "sources" with "tables".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #5758: [FLINK-9059][Table API & SQL] add table type attri...

2018-03-30 Thread suez1224
Github user suez1224 commented on a diff in the pull request:

https://github.com/apache/flink/pull/5758#discussion_r178244473
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -29,38 +30,47 @@
 
 /**
  * Environment configuration that represents the content of an environment file. Environment files
- * define sources, execution, and deployment behavior. An environment might be defined by default or
+ * define tables, execution, and deployment behavior. An environment might be defined by default or
  * as part of a session. Environments can be merged or enriched with properties (e.g. from CLI command).
  *
  * In future versions, we might restrict the merging or enrichment of deployment properties to not
  * allow overwriting of a deployment by a session.
  */
 public class Environment {
 
-   private Map<String, Source> sources;
+   private Map<String, TableDescriptor> tables;
 
private Execution execution;
 
private Deployment deployment;
 
public Environment() {
-   this.sources = Collections.emptyMap();
+   this.tables = Collections.emptyMap();
this.execution = new Execution();
this.deployment = new Deployment();
}
 
-   public Map<String, Source> getSources() {
-   return sources;
+   public Map<String, TableDescriptor> getTables() {
+   return tables;
}
 
-   public void setSources(List<Map<String, Object>> sources) {
-   this.sources = new HashMap<>(sources.size());
-   sources.forEach(config -> {
-   final Source s = Source.create(config);
-   if (this.sources.containsKey(s.getName())) {
-   throw new SqlClientException("Duplicate source 
name '" + s + "'.");
+   public void setTables(List<Map<String, Object>> tables) {
+   this.tables = new HashMap<>(tables.size());
+   tables.forEach(config -> {
+   if (!config.containsKey(TableDescriptor.TABLE_TYPE())) {
+   throw new SqlClientException("The 'type' 
attribute of a table is missing.");
--- End diff --

Yes, the values can be (source, sink and both), please see 
https://issues.apache.org/jira/browse/FLINK-8866.


---


[jira] [Resolved] (FLINK-9105) Table program compiles failed

2018-03-30 Thread Bob Lau (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Lau resolved FLINK-9105.

   Resolution: Fixed
Fix Version/s: 1.5.0
 Release Note: Flink 1.5-SNAPSHOT needs a newer commons-compiler version to compile the table program.

I excluded the transitive commons-compiler dependency pulled in by the calcite dependency, added an explicit dependency on commons-compiler 3.0.7, and the program now compiles successfully.

> Table program compiles failed
> -
>
> Key: FLINK-9105
> URL: https://issues.apache.org/jira/browse/FLINK-9105
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Bob Lau
>Priority: Major
> Fix For: 1.5.0
>
>
> ExceptionStack:
> org.apache.flink.client.program.ProgramInvocationException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
> at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:253)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:463)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:456)
> at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.executeRemotely(RemoteStreamEnvironment.java:219)
> at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.execute(RemoteStreamEnvironment.java:178)
> at com.tydic.tysc.job.service.SubmitJobService.submitJobToStandaloneCluster(SubmitJobService.java:150)
> at com.tydic.tysc.rest.SubmitJobController.submitJob(SubmitJobController.java:55)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
> at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:133)
> at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97)
> at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
> at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
> at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
> at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967)
> at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901)
> at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
> at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:661)
> at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
> at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
> at org.springframework.boot.web.filter.ApplicationContextHeaderFilter.doFilterInternal(ApplicationContextHeaderFilter.java:55)
> at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
> at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:112)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
> at com.tydic.tysc.filter.ShiroSessionFilter.doFilter(ShiroSessionFilter.java:51)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
> at 
>