[jira] [Created] (FLINK-11987) Kafka producer occasionally throws NullPointerException

2019-03-20 Thread LIU Xiao (JIRA)
LIU Xiao created FLINK-11987:


 Summary: Kafka producer occasionally throws NullPointerException
 Key: FLINK-11987
 URL: https://issues.apache.org/jira/browse/FLINK-11987
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.7.2, 1.6.4, 1.6.3
 Environment: Flink 1.6.2 (Standalone Cluster)

Oracle JDK 1.8u151

Centos 7.4
Reporter: LIU Xiao


We are using Flink 1.6.2 in our production environment, and the Kafka producer 
occasionally throws a NullPointerException.

We found that in line 175 of 
flink/flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java,
 NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR is declared as a static variable.

Then in line 837, 
"context.getOperatorStateStore().getUnionListState(NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR);"
 is called, which leads to line 734 of 
flink/flink-runtime/src/main/java/org/apache/flink/runtime/state/DefaultOperatorStateBackend.java:
 "stateDescriptor.initializeSerializerUnlessSet(getExecutionConfig());"

In the function initializeSerializerUnlessSet (line 283 of 
flink/flink-core/src/main/java/org/apache/flink/api/common/state/StateDescriptor.java):

if (serializer == null) {
    checkState(typeInfo != null, "no serializer and no type info");
    // instantiate the serializer
    serializer = typeInfo.createSerializer(executionConfig);
    // we can drop the type info now, no longer needed
    typeInfo = null;
}

"serializer = typeInfo.createSerializer(executionConfig);" is the line which 
throws the exception.

We think that's because multiple subtasks of the same producer in a same 
TaskManager share a same NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR.
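
For illustration, here is a minimal, self-contained sketch of that race. The classes below are hypothetical stand-ins, not Flink's actual StateDescriptor:

{code:java}
// Hypothetical stand-in for the unsynchronized check-then-act in StateDescriptor.
class SharedDescriptor {
    private Object typeInfo = new Object(); // plays the role of the TypeInformation
    private Object serializer;

    void initializeSerializerUnlessSet() {
        if (serializer == null) {
            // Another thread may have already created the serializer and
            // nulled out typeInfo between our null check and the next line.
            serializer = createSerializer(typeInfo); // NPE when typeInfo is null
            typeInfo = null; // "we can drop the type info now, no longer needed"
        }
    }

    private Object createSerializer(Object info) {
        return info.toString(); // dereferences info, throwing NPE if it is null
    }
}

public class SharedDescriptorRace {
    public static void main(String[] args) {
        // One shared descriptor used by several "subtasks" at once, mirroring
        // the static NEXT_TRANSACTIONAL_ID_HINT_DESCRIPTOR shared in a TaskManager.
        SharedDescriptor shared = new SharedDescriptor();
        for (int i = 0; i < 8; i++) {
            new Thread(shared::initializeSerializerUnlessSet).start();
        }
        // Occasionally a thread passes the serializer == null check after another
        // thread has set typeInfo = null, reproducing the intermittent NPE.
    }
}
{code}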



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11982) org.apache.flink.table.api.NoMatchingTableFactoryException

2019-03-20 Thread pingle wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797876#comment-16797876
 ] 

pingle wang commented on FLINK-11982:
-

Sorry, I was using BatchEnv; StreamEnv is supported. So this issue can be 
closed, or BatchEnv could add JSON BatchTableSource/Sink support. Thanks.

> org.apache.flink.table.api.NoMatchingTableFactoryException
> --
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val desc = tEnv.connect(connector)
>   .withFormat(
>     new Json()
>       .schema(
>         Types.ROW(
>           Array[String]("id", "name", "age"),
>           Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
>       )
>       .failOnMissingField(true)
>       .deriveSchema()
>   ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  
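
To illustrate what "Reason: No context matches" means, here is a generic, self-contained sketch of property-based factory matching; it is not Flink's actual TableFactoryService, and `TableFactory`/`find` here are hypothetical stand-ins:

{code:java}
import java.util.List;
import java.util.Map;

// Toy model: a factory is suitable only if every property in its required
// context equals the requested value (all names here are hypothetical).
public class FactoryMatchDemo {

    interface TableFactory {
        Map<String, String> requiredContext();
    }

    static TableFactory find(List<TableFactory> factories, Map<String, String> requested) {
        for (TableFactory factory : factories) {
            boolean matches = factory.requiredContext().entrySet().stream()
                .allMatch(e -> e.getValue().equals(requested.get(e.getKey())));
            if (matches) {
                return factory;
            }
        }
        // Mirrors "Could not find a suitable table factory ... No context matches."
        throw new RuntimeException("No context matches");
    }

    public static void main(String[] args) {
        TableFactory csvBatch = () -> Map.of("connector.type", "filesystem", "format.type", "csv");
        // The request asks for format.type=json, but only a CSV factory is registered,
        // so no context matches and the lookup fails.
        find(List.of(csvBatch), Map.of("connector.type", "filesystem", "format.type", "json"));
    }
}
{code}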



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11982) org.apache.flink.table.api.NoMatchingTableFactoryException

2019-03-20 Thread pingle wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797846#comment-16797846
 ] 

pingle wang commented on FLINK-11982:
-

[~frank wang] so what do you think about this issue?

> org.apache.flink.table.api.NoMatchingTableFactoryException
> --
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val desc = tEnv.connect(connector)
>   .withFormat(
>     new Json()
>       .schema(
>         Types.ROW(
>           Array[String]("id", "name", "age"),
>           Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
>       )
>       .failOnMissingField(true)
>       .deriveSchema()
>   ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] chermenin commented on a change in pull request #8022: [FLINK-11968] [runtime] Fixed SingleElementIterator and removed duplicate

2019-03-20 Thread GitBox
chermenin commented on a change in pull request #8022: [FLINK-11968] [runtime] 
Fixed SingleElementIterator and removed duplicate
URL: https://github.com/apache/flink/pull/8022#discussion_r267627267
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/SingleElementIterator.java
 ##
 @@ -64,6 +64,7 @@ public void remove() {
 
@Override
public Iterator iterator() {
+   available = true;
 
 Review comment:
   @wuchong In my opinion, we should even return a new instance of the 
iterator and call the `set()` method with `current` as the argument, to avoid 
modifying the current instance. What do you think about that?
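
   For illustration, a minimal self-contained sketch of that variant (hypothetical code, not the actual Flink class, which also implements `remove()`):

   ```java
   import java.util.Iterator;
   import java.util.NoSuchElementException;

   // Sketch: iterator() hands out a fresh instance seeded with the current
   // element instead of re-arming this one, so callers never mutate the original.
   public class SingleElementIterator<E> implements Iterator<E>, Iterable<E> {

       private E current;
       private boolean available;

       /** Resets the iterator to serve the given element exactly once. */
       public void set(E element) {
           this.current = element;
           this.available = true;
       }

       @Override
       public boolean hasNext() {
           return available;
       }

       @Override
       public E next() {
           if (!available) {
               throw new NoSuchElementException();
           }
           available = false;
           return current;
       }

       @Override
       public Iterator<E> iterator() {
           SingleElementIterator<E> fresh = new SingleElementIterator<>();
           fresh.set(current); // seed the copy; this instance stays untouched
           return fresh;
       }
   }
   ```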


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-8585) Efficiently handle read-only and non array ByteBuffers in DataInputDeserializer

2019-03-20 Thread Louis Xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Louis Xu reassigned FLINK-8585:
---

Assignee: Louis Xu  (was: Bowen Li)

> Efficiently handle read-only and non array ByteBuffers in 
> DataInputDeserializer
> ---
>
> Key: FLINK-8585
> URL: https://issues.apache.org/jira/browse/FLINK-8585
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.5.0
>Reporter: Piotr Nowojski
>Assignee: Louis Xu
>Priority: Major
>
> Currently, read-only and non-array-based buffers are handled inefficiently by 
> DataInputDeserializer, since they require an additional data copy to an 
> internal array. This could be avoided by using a ByteBuffer as the data 
> source instead of byte[].
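
To illustrate the inefficiency being described, a minimal sketch with hypothetical helper names (not the actual DataInputDeserializer API):

{code:java}
import java.nio.ByteBuffer;

public class ByteBufferCopyDemo {

    // Current pattern sketched: consumers that insist on a byte[] force an
    // extra copy for read-only or direct (non-array-backed) buffers.
    static byte[] toArrayWithCopy(ByteBuffer buffer) {
        if (buffer.hasArray() && !buffer.isReadOnly()) {
            return buffer.array(); // heap buffer: the backing array already exists
        }
        byte[] copy = new byte[buffer.remaining()];
        buffer.duplicate().get(copy); // read-only/direct buffer: data copied out
        return copy;
    }

    // Proposed direction sketched: read through the ByteBuffer API directly,
    // so no intermediate array is needed at all.
    static int readIntDirectly(ByteBuffer buffer) {
        return buffer.getInt();
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(8);
        direct.putInt(42).putInt(7);
        direct.flip();
        System.out.println(toArrayWithCopy(direct).length); // 8 bytes, after a copy
        System.out.println(readIntDirectly(direct));        // 42, no copy needed
    }
}
{code}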



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] zhijiangW commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
zhijiangW commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267622641
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -202,28 +233,97 @@ public TaskKvStateRegistry createKvStateTaskRegistry(JobID jobId, JobVertexID jo
 		return kvStateRegistry.createTaskRegistry(jobId, jobVertexId);
 	}
 
+	public ResultPartition[] getResultPartitionWriters(String taskId) {
+		return allPartitions.get(taskId);
+	}
+
+	public SingleInputGate[] getInputGates(String taskId) {
+		return allInputGates.get(taskId);
+	}
+
 	// ------------------------------------------------------------------
-	//  Task operations
+	//  Create Output Writers and Input Readers
 	// ------------------------------------------------------------------
 
-	public void registerTask(Task task) throws IOException {
-		final ResultPartition[] producedPartitions = task.getProducedPartitions();
+	public ResultPartitionWriter[] createResultPartitionWriters(
+			String taskId,
+			JobID jobId,
+			ExecutionAttemptID executionId,
+			TaskActions taskActions,
+			ResultPartitionConsumableNotifier partitionConsumableNotifier,
+			TaskIOMetricGroup metrics,
+			Collection<ResultPartitionDeploymentDescriptor> resultPartitionDeploymentDescriptors) throws IOException {
+
+		if (isShutdown) {
+			throw new IllegalStateException("NetworkEnvironment is shut down");
+		}
 
-		synchronized (lock) {
-			if (isShutdown) {
-				throw new IllegalStateException("NetworkEnvironment is shut down");
-			}
+		ResultPartition[] parititons = new ResultPartition[resultPartitionDeploymentDescriptors.size()];
+		int counter = 0;
+		for (ResultPartitionDeploymentDescriptor rpdd : resultPartitionDeploymentDescriptors) {
+			parititons[counter] = new ResultPartition(
+				taskId,
+				taskActions,
+				jobId,
+				new ResultPartitionID(rpdd.getPartitionId(), executionId),
+				rpdd.getPartitionType(),
+				rpdd.getNumberOfSubpartitions(),
+				rpdd.getMaxParallelism(),
+				resultPartitionManager,
+				partitionConsumableNotifier,
+				ioManager,
+				rpdd.sendScheduleOrUpdateConsumersMessage());
+
+			setupPartition(parititons[counter]);
+			counter++;
+		}
 
-			for (final ResultPartition partition : producedPartitions) {
-				setupPartition(partition);
+		allPartitions.put(taskId, parititons);
+
+		metrics.initializeOutputBufferMetrics(parititons);
 
 Review comment:
   Yes, this change was not considered before. The current `IOMetrics` relies on 
the specific `ResultPartition`, not the regular `ResultPartitionWriter` 
interface. The final form of the ShuffleService interface might include 
`{taskIdentifier(executionId or taskId), RPDD, metrics}`.
   
   It would seem better if we could create the `IOMetricGroup` inside the netty 
`ShuffleService` to keep the interface clean, but I have not thought through 
whether that is feasible.
   From another aspect, if we pass `IOMetricGroup` to the interface as we do 
now, it might also be reasonable, because the regular metric framework could 
also suit other writers. I looked at `ShuffleManager#getReader|getWriter` in 
Spark before, and it has a similar metric component in its interface.
   
   After the metrics initialization is migrated into the `ShuffleService` stack 
as it is now, `Task` can reference only the regular `ResultPartitionWriter` and 
`InputGate` arrays, which solves some issues in #7549 .
   
   Do you have further suggestions for the metrics?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #8022: [FLINK-11968] [runtime] Fixed SingleElementIterator and removed duplicate

2019-03-20 Thread GitBox
wuchong commented on a change in pull request #8022: [FLINK-11968] [runtime] 
Fixed SingleElementIterator and removed duplicate
URL: https://github.com/apache/flink/pull/8022#discussion_r267622518
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/SingleElementIterator.java
 ##
 @@ -64,6 +64,7 @@ public void remove() {
 
@Override
public Iterator iterator() {
+   available = true;
 
 Review comment:
   I think it's better to set `available` to true only if the `current` element 
is not null.
   
   ```java
   if (current != null) {
 this.available = true;
   }
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
zhijiangW commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267620888
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -88,6 +104,12 @@
 
private final boolean enableCreditBased;
 
+   private final boolean enableNetworkDetailedMetrics;
+
+	private final ConcurrentHashMap<String, ResultPartition[]> allPartitions;
 
 Review comment:
   Actually, there is another consideration behind this issue.
   Currently we pass both `taskId` and `executionId` to 
`ShuffleService#createResultPartitionWriters`. The parameters seem a bit 
duplicated and not clean. The `taskId` is used internally in `ResultPartition` 
for debugging logs, where it is more readable than `executionId`. The 
`executionId` parameter could be removed from this method in the future if it 
is obtained directly from `ResultPartitionDeploymentDescriptor`, as proposed in 
#7835 . That is why I used `taskId` as the index here.
   If we do not need `taskId` for debugging info in `ResultPartition`, then we 
can remove this parameter and only retain `executionId` as the index. Or we 
retain both parameters as they are now, switch to `executionId` as the index, 
and then consider in the future how to handle `taskId` to keep the interface 
simple?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
zhijiangW commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267619685
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -292,40 +392,31 @@ public void unregisterTask(Task task) {
LOG.debug("Unregister task {} from network environment (state: 
{}).",
task.getTaskInfo().getTaskNameWithSubtasks(), 
task.getExecutionState());
 
-   final ExecutionAttemptID executionId = task.getExecutionId();
-
synchronized (lock) {
if (isShutdown) {
// no need to do anything when we are not 
operational
return;
}
 
if (task.isCanceledOrFailed()) {
-   
resultPartitionManager.releasePartitionsProducedBy(executionId, 
task.getFailureCause());
+   
resultPartitionManager.releasePartitionsProducedBy(task.getExecutionId(), 
task.getFailureCause());
}
 
-   for (ResultPartition partition : 
task.getProducedPartitions()) {
+   for (ResultPartition partition : 
allPartitions.get(task.getTaskInfo().getTaskId())) {
 
 Review comment:
   Yes, I agree with your idea.
   Even if we remove the first log and use `executionId` as the index, we still 
need to consider how to refactor the uses of `task.isCanceledOrFailed()` and 
`task.getFailureCause()` in this method. I will try to refactor it in this way.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
zhijiangW commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267611334
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -202,28 +233,97 @@ public TaskKvStateRegistry createKvStateTaskRegistry(JobID jobId, JobVertexID jo
 		return kvStateRegistry.createTaskRegistry(jobId, jobVertexId);
 	}
 
+   public ResultPartition[] getResultPartitionWriters(String taskId) {
 
 Review comment:
   That is a good idea: removing the getters from `ShuffleService` keeps the 
interface simple, and the task can still maintain the arrays itself.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on issue #7938: [FLINK-10941] Keep slots which contain unconsumed result partitions (on top of #7186)

2019-03-20 Thread GitBox
zhijiangW commented on issue #7938: [FLINK-10941] Keep slots which contain 
unconsumed result partitions (on top of #7186)
URL: https://github.com/apache/flink/pull/7938#issuecomment-475094356
 
 
   After some further discussions with @azagrebin online, we reached 
agreement on the current approach. LGTM on my side. :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11982) org.apache.flink.table.api.NoMatchingTableFactoryException

2019-03-20 Thread pingle wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797735#comment-16797735
 ] 

pingle wang commented on FLINK-11982:
-

I added the flink-json jar and 
/META-INF/services/org.apache.flink.table.factories.TableFactory to my 
project's resources directory. The TableFactory file contains:

org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
org.apache.flink.formats.avro.AvroRowFormatFactory
org.apache.flink.formats.json.JsonRowFormatFactory

The exception still occurs.
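
For context, factories listed in such a service file are discovered via Java's ServiceLoader; here is a generic, self-contained sketch of that mechanism (`MyFactory` is a hypothetical interface, not Flink's TableFactory machinery):

{code:java}
import java.util.ServiceLoader;

public class ServiceDiscoveryDemo {

    public interface MyFactory {
        String formatType();
    }

    public static void main(String[] args) {
        // ServiceLoader reads every META-INF/services/<interface-name> file on
        // the classpath and instantiates the classes listed there, one per line.
        for (MyFactory factory : ServiceLoader.load(MyFactory.class)) {
            System.out.println("discovered factory for format: " + factory.formatType());
        }
    }
}
{code}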

 

 

> org.apache.flink.table.api.NoMatchingTableFactoryException
> --
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val desc = tEnv.connect(connector)
>   .withFormat(
>     new Json()
>       .schema(
>         Types.ROW(
>           Array[String]("id", "name", "age"),
>           Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
>       )
>       .failOnMissingField(true)
>       .deriveSchema()
>   ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] klion26 commented on a change in pull request #8008: [FLINK-11963][History Server]Add time-based cleanup mechanism in history server

2019-03-20 Thread GitBox
klion26 commented on a change in pull request #8008: [FLINK-11963][History 
Server]Add time-based cleanup mechanism in history server
URL: https://github.com/apache/flink/pull/8008#discussion_r267608121
 
 

 ##
 File path: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
 ##
 @@ -123,7 +125,9 @@ void stop() {
 
private static final String JSON_FILE_ENDING = ".json";
 
-	JobArchiveFetcherTask(List refreshDirs, File webDir, CountDownLatch numFinishedPolls) {
+	JobArchiveFetcherTask(long retainedApplicationsMillis, List refreshDirs, File webDir,
+			CountDownLatch numFinishedPolls) {
+		this.retainedApplicationsMillis = retainedApplicationsMillis;
 
 Review comment:
   Maybe we could add a check here on whether the passed-in parameter 
`retainedApplicationsMillis` is positive.
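
   For illustration, a self-contained sketch of such a check (hypothetical class name; the real PR could use Flink's Preconditions utility instead):

   ```java
   public class JobArchiveFetcherTaskSketch {

       private final long retainedApplicationsMillis;

       JobArchiveFetcherTaskSketch(long retainedApplicationsMillis) {
           // Fail fast on an invalid configuration value instead of
           // silently carrying it into the fetcher loop.
           if (retainedApplicationsMillis <= 0) {
               throw new IllegalArgumentException(
                   "retainedApplicationsMillis must be positive, but was " + retainedApplicationsMillis);
           }
           this.retainedApplicationsMillis = retainedApplicationsMillis;
       }

       public static void main(String[] args) {
           new JobArchiveFetcherTaskSketch(86_400_000L); // ok: retain one day
           new JobArchiveFetcherTaskSketch(0L);          // throws IllegalArgumentException
       }
   }
   ```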


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11982) org.apache.flink.table.api.NoMatchingTableFactoryException

2019-03-20 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797728#comment-16797728
 ] 

frank wang commented on FLINK-11982:


it looks like you have not provided enough properties

> org.apache.flink.table.api.NoMatchingTableFactoryException
> --
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val desc = tEnv.connect(connector)
>   .withFormat(
>     new Json()
>       .schema(
>         Types.ROW(
>           Array[String]("id", "name", "age"),
>           Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
>       )
>       .failOnMissingField(true)
>       .deriveSchema()
>   ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11982) org.apache.flink.table.api.NoMatchingTableFactoryException

2019-03-20 Thread frank wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

frank wang reassigned FLINK-11982:
--

Assignee: frank wang

> org.apache.flink.table.api.NoMatchingTableFactoryException
> --
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: API / Table SQL
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val desc = tEnv.connect(connector)
>   .withFormat(
>     new Json()
>       .schema(
>         Types.ROW(
>           Array[String]("id", "name", "age"),
>           Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
>       )
>       .failOnMissingField(true)
>       .deriveSchema()
>   ).registerTableSource("persion")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] klion26 commented on a change in pull request #8008: [FLINK-11963][History Server]Add time-based cleanup mechanism in history server

2019-03-20 Thread GitBox
klion26 commented on a change in pull request #8008: [FLINK-11963][History 
Server]Add time-based cleanup mechanism in history server
URL: https://github.com/apache/flink/pull/8008#discussion_r267607149
 
 

 ##
 File path: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
 ##
 @@ -153,9 +157,20 @@ public void run() {
				continue;
			}
			boolean updateOverview = false;
+			long now = System.currentTimeMillis();
			for (FileStatus jobArchive : jobArchives) {
				Path jobArchivePath = jobArchive.getPath();
				String jobID = jobArchivePath.getName();
+				if (retainedApplicationsMillis > 0L && now - jobArchive.getModificationTime() > retainedApplicationsMillis) {
+					if (LOG.isDebugEnabled()) {
+						LOG.debug("delete the old archived job for path {}." + jobArchivePath.toString());
+					}
+					jobArchivePath.getFileSystem().delete(jobArchivePath, false);
+					continue;
+				} else {
+					LOG.warn("Negative or zero values {} of the historyserver.archive.fs.retained-application-millis " +
 
 Review comment:
   Maybe we will also come into the else branch when `retainedApplicationsMillis` > 
0 (for archives that have not expired yet), so this warning would fire incorrectly?
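
   For illustration, a self-contained sketch of the branch structure this hints at (an assumption of the intended semantics: warn only for a non-positive setting, never for unexpired archives):

   ```java
   public class RetentionBranchSketch {

       static String decide(long retainedMillis, long archiveAgeMillis) {
           if (retainedMillis <= 0L) {
               // Only a non-positive config value deserves the warning.
               return "warn: non-positive retained-application-millis, retention disabled";
           }
           if (archiveAgeMillis > retainedMillis) {
               return "delete expired archive";
           }
           return "keep archive"; // young archive: no warning, no deletion
       }

       public static void main(String[] args) {
           System.out.println(decide(1000L, 5000L)); // delete expired archive
           System.out.println(decide(1000L, 500L));  // keep archive
           System.out.println(decide(0L, 500L));     // warn: retention disabled
       }
   }
   ```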


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] klion26 commented on issue #7921: [FLINK-11825][StateBackends] Resolve name clash of StateTTL TimeCharacteristic class

2019-03-20 Thread GitBox
klion26 commented on issue #7921: [FLINK-11825][StateBackends] Resolve name 
clash of StateTTL TimeCharacteristic class
URL: https://github.com/apache/flink/pull/7921#issuecomment-475090093
 
 
   thank you all for the review :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11872) update lz4 license file

2019-03-20 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-11872.
--
   Resolution: Fixed
Fix Version/s: 1.9.0

fixed in 5bd7ed4364497577709e9225f0522adb0b428e7a

> update lz4 license file
> ---
>
> Key: FLINK-11872
> URL: https://issues.apache.org/jira/browse/FLINK-11872
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Kurt Young
>Assignee: Kurt Young
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] KurtYoung merged pull request #7952: [FLINK-11872][table-runtime-blink] update lz4 license file.

2019-03-20 Thread GitBox
KurtYoung merged pull request #7952: [FLINK-11872][table-runtime-blink] update 
lz4 license file.
URL: https://github.com/apache/flink/pull/7952
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 commented on a change in pull request #8027: [FLINK-11972] [docs] Add necessary notes about running streaming bucketing e2e test in README

2019-03-20 Thread GitBox
sunjincheng121 commented on a change in pull request #8027: [FLINK-11972] 
[docs] Add necessary notes about running streaming bucketing e2e test in README
URL: https://github.com/apache/flink/pull/8027#discussion_r267589148
 
 

 ##
 File path: flink-end-to-end-tests/README.md
 ##
 @@ -33,6 +33,12 @@ $ FLINK_DIR= flink-end-to-end-tests/run-single-test.sh your_test.sh a
 
 **NOTICE**: Please _DON'T_ run the scripts with explicit command like ```sh run-nightly-tests.sh``` since ```#!/usr/bin/env bash``` is specified as the header of the scripts to assure flexibility on different systems.
 
+### Streaming bucketing test
+
+Before running this nightly test case (test_streaming_bucketing.sh), please make sure to run `mvn -DskipTests install` in the `flink-end-to-end-tests` directory, so jar files necessary for the test like `BucketingSinkTestProgram.jar` could be generated.
+
+What's more, starting from the 1.8.0 release it's required to make sure that `HADOOP_CLASSPATH` is [correctly set](https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/hadoop.html) or the pre-bundled hadoop jar has been put into the `lib` folder of the `FLINK_DIR`
+
 
 Review comment:
   Can we add a description of where the pre-packaged Hadoop JAR can be 
downloaded? :-)
   Please refer to the section on downloading Hadoop in the 1.8 release note. PR: 
https://github.com/apache/flink-web/pull/179/files


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797646#comment-16797646
 ] 

Yu Li commented on FLINK-11972:
---

Checked and confirmed that with Hadoop 2.6.5 the same test passed in both 
environments, using either the [shaded 
bundle|https://repository.apache.org/content/repositories/orgapacheflink-1213/org/apache/flink/flink-shaded-hadoop2-uber/2.6.5-1.8.0/flink-shaded-hadoop2-uber-2.6.5-1.8.0.jar]
 or the [hadoop dist|http://archive.apache.org/dist/hadoop/core/hadoop-2.6.5/].

Output of the script in the passing state:
{noformat}
Truncating /home/jueding.ly/flink_rc_check/flink-1.8.0-src/flink-end-to-end-tests/test-scripts/temp-test-directory-06210353709/out/result3/part-3-0 to 51250
1+0 records in
1+0 records out
51250 bytes (51 kB) copied, 0.000377998 s, 136 MB/s
Truncating /home/jueding.ly/flink_rc_check/flink-1.8.0-src/flink-end-to-end-tests/test-scripts/temp-test-directory-06210353709/out/result7/part-3-0 to 51250
1+0 records in
1+0 records out
51250 bytes (51 kB) copied, 0.00033118 s, 155 MB/s
pass Bucketing Sink
{noformat}

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Yu Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-03-21-06-26-49-787.png, screenshot-1.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The difference between 1.8.0 and 1.7.x is that 1.8.x no longer bundles the 
> `hadoop-shaded` JAR into the dist. This causes an error when the end-to-end 
> test cannot find `Hadoop`-related classes, such as 
> `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. So we 
> need to improve the end-to-end test script, or state explicitly in the 
> README that the end-to-end test needs to add `flink-shaded-hadoop2-uber-.jar` 
> to the classpath. Otherwise, we will get an exception like:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11972:
---
Labels: pull-request-available  (was: )

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Yu Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2019-03-21-06-26-49-787.png, screenshot-1.png
>
>
> The difference between 1.8.0 and 1.7.x is that 1.8.x no longer bundles the 
> `hadoop-shaded` JAR into the dist. This causes an error when the end-to-end 
> test cannot find `Hadoop`-related classes, such as 
> `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. So we 
> need to improve the end-to-end test script, or state explicitly in the 
> README that the end-to-end test needs to add `flink-shaded-hadoop2-uber-.jar` 
> to the classpath. Otherwise, we will get an exception like:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #8027: [FLINK-11972] [docs] Add necessary notes about running streaming bucketing e2e test in README

2019-03-20 Thread GitBox
flinkbot commented on issue #8027: [FLINK-11972] [docs] Add necessary notes 
about running streaming bucketing e2e test in README
URL: https://github.com/apache/flink/pull/8027#issuecomment-475058251
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] carp84 opened a new pull request #8027: [FLINK-11972] [docs] Add necessary notes about running streaming bucketing e2e test in README

2019-03-20 Thread GitBox
carp84 opened a new pull request #8027: [FLINK-11972] [docs] Add necessary 
notes about running streaming bucketing e2e test in README
URL: https://github.com/apache/flink/pull/8027
 
 
   ## What is the purpose of the change
   
   * Add necessary notes to make it easier for running streaming bucketing end 
to end test case
   
   
   ## Brief change log
   
   * Mainly two notes added, one for downloading hadoop bundles, the other for 
running mvn install to generate the required jars.
   
   
   ## Verifying this change
   
   This change is a document improvement without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread sunjincheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-11972:

Attachment: screenshot-1.png

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Yu Li
>Priority: Major
> Attachments: image-2019-03-21-06-26-49-787.png, screenshot-1.png
>
>
> The difference between 1.8.0 and 1.7.x is that 1.8.x no longer bundles the 
> `hadoop-shaded` JAR into the dist. This causes an error when the end-to-end 
> test cannot find `Hadoop`-related classes, such as 
> `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. So we 
> need to improve the end-to-end test script, or state explicitly in the 
> README that the end-to-end test needs to add `flink-shaded-hadoop2-uber-.jar` 
> to the classpath. Otherwise, we will get an exception like:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread sunjincheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-11972:

Attachment: image-2019-03-21-06-26-49-787.png

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Yu Li
>Priority: Major
> Attachments: image-2019-03-21-06-26-49-787.png
>
>
> The difference between 1.8.0 and 1.7.x is that 1.8.x no longer bundles the 
> `hadoop-shaded` JAR into the dist. This causes an error when the end-to-end 
> test cannot find `Hadoop`-related classes, such as 
> `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. So we 
> need to improve the end-to-end test script, or state explicitly in the 
> README that the end-to-end test needs to add `flink-shaded-hadoop2-uber-.jar` 
> to the classpath. Otherwise, we will get an exception like:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797614#comment-16797614
 ] 

Yu Li commented on FLINK-11972:
---

I tried the streaming bucketing case in two environments, Mac 10.14.3 and 
Linux 3.10.0, and it failed in both. I'm running the test with the shaded 
hadoop 2.8.3 bundle, which can be downloaded 
[here|https://repository.apache.org/content/repositories/orgapacheflink-1213/org/apache/flink/flink-shaded-hadoop2-uber/2.8.3-1.8.0/flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar].
 It seems to me like a real issue rather than an environment problem, and I 
think we need to investigate it further. [~sunjincheng121] [~aljoscha]

From the output of the test script, we can see the messages below:
{noformat}
Starting taskexecutor daemon on host z05f06378.sqa.zth.
Waiting for job (905ae10bae4b99031e724b9c29f0ca7b) to reach terminal state 
FINISHED ...
Truncating buckets
Truncating  to
{noformat}
And in the standalonesession log, we can confirm the job finished:
{noformat}
2019-03-21 05:59:59,512 INFO  
org.apache.flink.runtime.dispatcher.StandaloneDispatcher  - Job 
905ae10bae4b99031e724b9c29f0ca7b reached globally terminal state FINISHED.
{noformat}

Checking the {{test_streaming_bucketing.sh}} script, we can see it runs the 
lines below:
{noformat}
LOG_LINES=$(grep -rnw $FLINK_DIR/log -e 'Writing valid-length file')

# perform truncate on every line
echo "Truncating buckets"
while read -r LOG_LINE; do
  PART=$(echo "$LOG_LINE" | awk '{ print $10 }' FS=" ")
  LENGTH=$(echo "$LOG_LINE" | awk '{ print $15 }' FS=" ")

  echo "Truncating $PART to $LENGTH"

  dd if=$PART of="$PART.truncated" bs=$LENGTH count=1
  rm $PART
  mv "$PART.truncated" $PART
done <<< "$LOG_LINES"
{noformat}

However, when grepping for the "Writing valid-length file" message in the log 
dir, *nothing appeared*. Checking the task-executor log, I observed something 
suspicious:
{noformat}
2019-03-21 05:59:59,486 DEBUG 
org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink  - Moving 
in-progress bucket 
/home/jueding.ly/flink_rc_check/flink-1.8.0-src/flink-end-to-end-tests/test-scripts/temp-test-directory-53249980906/out/result4/_part-0-1.in-progress
 to pending file 
/home/jueding.ly/flink_rc_check/flink-1.8.0-src/flink-end-to-end-tests/test-scripts/temp-test-directory-53249980906/out/result4/_part-0-1.pending
{noformat}
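
One way the script could guard against this (a hypothetical hardening sketch, 
not the actual fix): fail fast when the expected log line is absent, instead 
of feeding an empty variable into the truncation loop:
{noformat}
# Sketch: abort early if no 'Writing valid-length file' entries exist,
# rather than running the truncation loop over an empty $LOG_LINES.
LOG_LINES=$(grep -rnw $FLINK_DIR/log -e 'Writing valid-length file')
if [ -z "$LOG_LINES" ]; then
  echo "No 'Writing valid-length file' entries found in $FLINK_DIR/log; aborting."
  exit 1
fi
{noformat}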

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Yu Li
>Priority: Major
>
> Unlike 1.7.x, the 1.8.0 distribution no longer bundles the shaded Hadoop 
> JAR, so an error occurs when the end-to-end tests cannot find Hadoop-related 
> classes, such as `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. 
> We therefore need to improve the end-to-end test script, or state explicitly 
> in the README that the end-to-end tests need `flink-shaded-hadoop2-uber-.jar` 
> on the classpath. Otherwise we get an exception like the following:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(Clu

[GitHub] [flink] flinkbot commented on issue #8026: [FLINK-5490] prevent ContextEnvironment#getExecutionPlan() from clear…

2019-03-20 Thread GitBox
flinkbot commented on issue #8026: [FLINK-5490] prevent 
ContextEnvironment#getExecutionPlan() from clear…
URL: https://github.com/apache/flink/pull/8026#issuecomment-475049828
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] npav commented on issue #3977: [FLINK-5490] Not allowing sinks to be cleared when getting the execut…

2019-03-20 Thread GitBox
npav commented on issue #3977: [FLINK-5490] Not allowing sinks to be cleared 
when getting the execut…
URL: https://github.com/apache/flink/pull/3977#issuecomment-475049924
 
 
   Included the fix + test in https://github.com/apache/flink/pull/8026


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] npav opened a new pull request #8026: [FLINK-5490] prevent ContextEnvironment#getExecutionPlan() from clear…

2019-03-20 Thread GitBox
npav opened a new pull request #8026: [FLINK-5490] prevent 
ContextEnvironment#getExecutionPlan() from clear…
URL: https://github.com/apache/flink/pull/8026
 
 
   …ing the sinks
   
   ## What is the purpose of the change
   
   This pull request prevents ContextEnvironment#getExecutionPlan() from 
clearing the sinks. It allows for consecutive calls of 
ContextEnvironment#getExecutionPlan() or subsequent calls of 
ContextEnvironment#getExecutionPlan() and ExecutionEnvironment#execute(), which 
isn't currently possible because an exception about having no defined sinks is 
thrown.
   
   ## Brief change log
   
   * ContextEnvironment#getExecutionPlan() now calls 
ExecutionEnvironment#createProgramPlan() with the clearSinks flag explicitly 
set to "false" (it defaults to "true" if not set)
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   * Added a unit test which performs two consecutive calls to 
ContextEnvironment#getExecutionPlan(). This test throws an exception if the 
execution plan is cleared in-between calls. It is failing with the current code 
and succeeding with this patch.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797561#comment-16797561
 ] 

sunjincheng commented on FLINK-11972:
-

Sounds good!

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Yu Li
>Priority: Major
>
> Unlike 1.7.x, the 1.8.0 distribution no longer bundles the shaded Hadoop 
> JAR, so an error occurs when the end-to-end tests cannot find Hadoop-related 
> classes, such as `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. 
> We therefore need to improve the end-to-end test script, or state explicitly 
> in the README that the end-to-end tests need `flink-shaded-hadoop2-uber-.jar` 
> on the classpath. Otherwise we get an exception like the following:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11971) Fix `Command: start_kubernetes_if_not_ruunning failed` error

2019-03-20 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797516#comment-16797516
 ] 

sunjincheng commented on FLINK-11971:
-

Thanks for taking this JIRA, [~hequn8128]!
Since the new RC for release-1.8 should be coming ASAP, [~aljoscha] opened the 
PR, and I will merge it.
Anyway, thanks for your efforts. 

Best,
Jincheng.

> Fix `Command: start_kubernetes_if_not_ruunning failed` error
> 
>
> Key: FLINK-11971
> URL: https://issues.apache.org/jira/browse/FLINK-11971
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When I did the end-to-end test under Mac OS, I found the following two 
> problems:
>  1. The check of the `minikube status` output is not robust enough: the 
> strings returned by different versions on different platforms differ, which 
> causes the following misjudgment:
>  When the `Command: start_kubernetes_if_not_ruunning failed` error occurs, 
> `minikube` has actually started successfully. The core reason is a bug in 
> the `test_kubernetes_embedded_job.sh` script. The error message is as 
> follows:
>  !screenshot-1.png! 
> {code:java}
> Current check logic: echo ${status} | grep -q "minikube: Running cluster: 
> Running kubectl: Correctly Configured"
>  My local info
> jinchengsunjcs-iMac:flink-1.8.0 jincheng$ minikube status
> host: Running
> kubelet: Running
> apiserver: Running
> kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.101{code}
> So, I think we should improve the check logic of `minikube status`. What do 
> you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
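
A more portable check (a sketch only; the actual fix merged via PR #8024 may 
differ) would match each component's status line individually instead of one 
platform-specific concatenated string:
{noformat}
# Sketch: check each minikube component separately. Assumes the newer
# 'host:/kubelet:/apiserver:' output format shown in the issue.
status=$(minikube status)
for component in host kubelet apiserver; do
  echo "$status" | grep -q "$component: Running" || exit 1
done
echo "$status" | grep -q "kubectl: Correctly Configured" || exit 1
{noformat}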


[jira] [Comment Edited] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797528#comment-16797528
 ] 

Yu Li edited comment on FLINK-11972 at 3/20/19 8:37 PM:


bq. I appreciate if you want to take this ticket and fix all of them?
Sure, let me take this one. Found one more possible issue and am locating the 
root cause now; will submit a PR once done. Thanks. [~sunjincheng121]


was (Author: carp84):
bq. I appreciate if you want to take this ticket and fix all of them?
Sure, let me take this one. Find one more possible issue and locating the root 
cause, will submit a PR once done. Thanks. [~sunjincheng121]

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Yu Li
>Priority: Major
>
> Unlike 1.7.x, the 1.8.0 distribution no longer bundles the shaded Hadoop 
> JAR, so an error occurs when the end-to-end tests cannot find Hadoop-related 
> classes, such as `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. 
> We therefore need to improve the end-to-end test script, or state explicitly 
> in the README that the end-to-end tests need `flink-shaded-hadoop2-uber-.jar` 
> on the classpath. Otherwise we get an exception like the following:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread Yu Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li reassigned FLINK-11972:
-

Assignee: Yu Li

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Yu Li
>Priority: Major
>
> Unlike 1.7.x, the 1.8.0 distribution no longer bundles the shaded Hadoop 
> JAR, so an error occurs when the end-to-end tests cannot find Hadoop-related 
> classes, such as `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. 
> We therefore need to improve the end-to-end test script, or state explicitly 
> in the README that the end-to-end tests need `flink-shaded-hadoop2-uber-.jar` 
> on the classpath. Otherwise we get an exception like the following:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797528#comment-16797528
 ] 

Yu Li commented on FLINK-11972:
---

bq. I appreciate if you want to take this ticket and fix all of them?
Sure, let me take this one. Found one more possible issue and am locating the 
root cause; will submit a PR once done. Thanks. [~sunjincheng121]

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Priority: Major
>
> Unlike 1.7.x, the 1.8.0 distribution no longer bundles the shaded Hadoop 
> JAR, so an error occurs when the end-to-end tests cannot find Hadoop-related 
> classes, such as `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. 
> We therefore need to improve the end-to-end test script, or state explicitly 
> in the README that the end-to-end tests need `flink-shaded-hadoop2-uber-.jar` 
> on the classpath. Otherwise we get an exception like the following:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797524#comment-16797524
 ] 

sunjincheng commented on FLINK-11972:
-

Hi [~carp84], thanks for your feedback and for the deep testing! I would 
appreciate it if you could take this ticket and fix all of them.

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Priority: Major
>
> Unlike 1.7.x, the 1.8.0 distribution no longer bundles the shaded Hadoop 
> JAR, so an error occurs when the end-to-end tests cannot find Hadoop-related 
> classes, such as `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. 
> We therefore need to improve the end-to-end test script, or state explicitly 
> in the README that the end-to-end tests need `flink-shaded-hadoop2-uber-.jar` 
> on the classpath. Otherwise we get an exception like the following:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11971) Fix `Command: start_kubernetes_if_not_ruunning failed` error

2019-03-20 Thread sunjincheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng closed FLINK-11971.
---
   Resolution: Fixed
Fix Version/s: 1.9.0
   1.8.0
 Release Note: Fix kubernetes check in end-to-end test.

Fixed in master: 56d81e0dcadd07d634f92ac464a49d3313f56621
Fixed in release-1.8: eb571567ccbf5cb663e91cbbe3d9d8685b6b52fa

> Fix `Command: start_kubernetes_if_not_ruunning failed` error
> 
>
> Key: FLINK-11971
> URL: https://issues.apache.org/jira/browse/FLINK-11971
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0, 1.9.0
>
> Attachments: screenshot-1.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When I did the end-to-end test under Mac OS, I found the following two 
> problems:
>  1. The check of the `minikube status` output is not robust enough: the 
> strings returned by different versions on different platforms differ, which 
> causes the following misjudgment:
>  When the `Command: start_kubernetes_if_not_ruunning failed` error occurs, 
> `minikube` has actually started successfully. The core reason is a bug in 
> the `test_kubernetes_embedded_job.sh` script. The error message is as 
> follows:
>  !screenshot-1.png! 
> {code:java}
> Current check logic: echo ${status} | grep -q "minikube: Running cluster: 
> Running kubectl: Correctly Configured"
>  My local info
> jinchengsunjcs-iMac:flink-1.8.0 jincheng$ minikube status
> host: Running
> kubelet: Running
> apiserver: Running
> kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.101{code}
> So, I think we should improve the check logic of `minikube status`. What do 
> you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] asfgit closed pull request #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
asfgit closed pull request #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7952: [FLINK-11872][table-runtime-blink] update lz4 license file.

2019-03-20 Thread GitBox
flinkbot edited a comment on issue #7952: [FLINK-11872][table-runtime-blink] 
update lz4 license file.
URL: https://github.com/apache/flink/pull/7952#issuecomment-471516150
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @twalthr [PMC], @zentol [PMC]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @twalthr [PMC], @zentol [PMC]
   * ❗ 3. Needs [attention] from.
   - Needs attention by @zentol [PMC]
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @twalthr [PMC], @zentol [PMC]
   * ✅ 5. Overall code [quality] is good.
   - Approved by @zentol [PMC]
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11972) The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR during the end-to-end test.

2019-03-20 Thread Yu Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797504#comment-16797504
 ] 

Yu Li commented on FLINK-11972:
---

There's another prerequisite for running the {{test_streaming_bucketing.sh}} 
case: we must make sure to run {{mvn install}} in the flink-end-to-end-tests 
directory. Below is the more detailed reason:

In {{test_streaming_bucketing.sh}} the command to submit the job and get the 
job id looks like:
{noformat}
TEST_PROGRAM_JAR=${END_TO_END_DIR}/flink-bucketing-sink-test/target/BucketingSinkTestProgram.jar
...
JOB_ID=$($FLINK_DIR/bin/flink run -d -p 4 $TEST_PROGRAM_JAR -outputPath 
$TEST_DATA_DIR/out/result \
  | grep "Job has been submitted with JobID" | sed 's/.* //g')
{noformat}
And {{TEST_PROGRAM_JAR}} needs to be generated by the install and won't be 
there by default. In this case, the result of the job submission command will 
be something like:
{noformat}
Could not build the program from JAR file.

Use the help option (-h or --help) to get help on the command.
{noformat}
Thus the grep gets nothing, the job id is empty, and the script hangs at the 
{{wait_job_running}} phase, with log output like:
{noformat}
Job () is running.
Waiting for job () to have at least 5 completed checkpoints ...
{noformat}

Will add the notice to the documentation, and also try to improve the script 
to log and fail fast if the target jar is missing (see the sketch below).
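
A sketch of that fast-fail idea (hypothetical; it reuses the script's existing 
{{TEST_PROGRAM_JAR}} variable):
{noformat}
# Sketch: fail fast with a clear message when the test jar has not been
# built, instead of hanging later at the wait_job_running phase.
if [ ! -f "$TEST_PROGRAM_JAR" ]; then
  echo "$TEST_PROGRAM_JAR not found; run 'mvn install' in flink-end-to-end-tests first."
  exit 1
fi
{noformat}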

> The classpath is missing the `flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar` JAR 
> during the end-to-end test.
> 
>
> Key: FLINK-11972
> URL: https://issues.apache.org/jira/browse/FLINK-11972
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Priority: Major
>
> Unlike 1.7.x, the 1.8.0 distribution no longer bundles the shaded Hadoop 
> JAR, so an error occurs when the end-to-end tests cannot find Hadoop-related 
> classes, such as `java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem`. 
> We therefore need to improve the end-to-end test script, or state explicitly 
> in the README that the end-to-end tests need `flink-shaded-hadoop2-uber-.jar` 
> on the classpath. Otherwise we get an exception like the following:
> {code:java}
> [INFO] 3 instance(s) of taskexecutor are already running on 
> jinchengsunjcs-iMac.local.
> Starting taskexecutor daemon on host jinchengsunjcs-iMac.local.
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/fs/FileSystem;
> at java.lang.Class.getDeclaredFields0(Native Method)
> at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
> at java.lang.Class.getDeclaredFields(Class.java:1916)
> at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
> at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1558)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:185)
> at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1227)
> at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:80)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:423)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FileSystem
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 22 more
> Job () is running.{code}
> So, I think we can improve the test script or the README.
> What do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[GitHub] [flink] flinkbot edited a comment on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
flinkbot edited a comment on issue #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-474882848
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @sunjincheng121 [committer]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @sunjincheng121 [committer]
   * ❓ 3. Needs [attention] from.
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @sunjincheng121 [committer]
   * ✅ 5. Overall code [quality] is good.
   - Approved by @sunjincheng121 [committer]
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 commented on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
sunjincheng121 commented on issue #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-475005137
 
 
   @flinkbot approve all


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 commented on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
sunjincheng121 commented on issue #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-475004941
 
 
   Thanks for the quick fix @aljoscha!
   It works well in my local ENV (iMac).
   Merging...
   
   Best,
   Jincheng
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] azagrebin commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
azagrebin commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267514203
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -292,40 +392,31 @@ public void unregisterTask(Task task) {
 		LOG.debug("Unregister task {} from network environment (state: {}).",
 			task.getTaskInfo().getTaskNameWithSubtasks(), task.getExecutionState());
 
 -		final ExecutionAttemptID executionId = task.getExecutionId();
 -
 		synchronized (lock) {
 			if (isShutdown) {
 				// no need to do anything when we are not operational
 				return;
 			}
 
 			if (task.isCanceledOrFailed()) {
 -				resultPartitionManager.releasePartitionsProducedBy(executionId, task.getFailureCause());
 +				resultPartitionManager.releasePartitionsProducedBy(task.getExecutionId(), task.getFailureCause());
 			}
 
 -			for (ResultPartition partition : task.getProducedPartitions()) {
 +			for (ResultPartition partition : allPartitions.get(task.getTaskInfo().getTaskId())) {
 
 Review comment:
   If we used just the execution id to index task partitions/gates, 
`unregisterTask` could accept only the `ExecutionId` and the failure cause. 
The first log statement with task specifics could perhaps go outside. That 
would mean less dependency on the Task class.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] azagrebin commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
azagrebin commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267510295
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -88,6 +104,12 @@
 
 	private final boolean enableCreditBased;
 
 +	private final boolean enableNetworkDetailedMetrics;
 +
 +	private final ConcurrentHashMap<String, ResultPartition[]> allPartitions;
 
 Review comment:
   The resultPartitionManager already has a "partition by execution id" mapping.
   Why do we need to introduce an extra `taskId`? A task seems to be 
identifiable by its execution id.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] azagrebin commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
azagrebin commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267511422
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -202,28 +233,97 @@ public TaskKvStateRegistry createKvStateTaskRegistry(JobID jobId, JobVertexID jo
 		return kvStateRegistry.createTaskRegistry(jobId, jobVertexId);
 	}
 
 +	public ResultPartition[] getResultPartitionWriters(String taskId) {
 
 Review comment:
   Will the ShuffleService also have the partition/gate getters then? It seems 
logical to let the ShuffleService manage partitions/gates. Alternatively, the 
task could still keep final arrays with partitions/gates; then there would be 
fewer dependencies on the ShuffleService interface. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] azagrebin commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
azagrebin commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267513101
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -202,28 +233,97 @@ public TaskKvStateRegistry createKvStateTaskRegistry(JobID jobId, JobVertexID jo
 		return kvStateRegistry.createTaskRegistry(jobId, jobVertexId);
 	}
 
 +	public ResultPartition[] getResultPartitionWriters(String taskId) {
 +		return allPartitions.get(taskId);
 +	}
 +
 +	public SingleInputGate[] getInputGates(String taskId) {
 +		return allInputGates.get(taskId);
 +	}
 +
 	// ------------------------------------------------------------------------
 -	//  Task operations
 +	//  Create Output Writers and Input Readers
 	// ------------------------------------------------------------------------
 
 -	public void registerTask(Task task) throws IOException {
 -		final ResultPartition[] producedPartitions = task.getProducedPartitions();
 +	public ResultPartitionWriter[] createResultPartitionWriters(
 +			String taskId,
 +			JobID jobId,
 +			ExecutionAttemptID executionId,
 +			TaskActions taskActions,
 +			ResultPartitionConsumableNotifier partitionConsumableNotifier,
 +			TaskIOMetricGroup metrics,
 +			Collection<ResultPartitionDeploymentDescriptor> resultPartitionDeploymentDescriptors) throws IOException {
 +
 +		if (isShutdown) {
 +			throw new IllegalStateException("NetworkEnvironment is shut down");
 +		}
 
 -		synchronized (lock) {
 -			if (isShutdown) {
 -				throw new IllegalStateException("NetworkEnvironment is shut down");
 -			}
 +		ResultPartition[] parititons = new ResultPartition[resultPartitionDeploymentDescriptors.size()];
 +		int counter = 0;
 +		for (ResultPartitionDeploymentDescriptor rpdd : resultPartitionDeploymentDescriptors) {
 +			parititons[counter] = new ResultPartition(
 +				taskId,
 +				taskActions,
 +				jobId,
 +				new ResultPartitionID(rpdd.getPartitionId(), executionId),
 +				rpdd.getPartitionType(),
 +				rpdd.getNumberOfSubpartitions(),
 +				rpdd.getMaxParallelism(),
 +				resultPartitionManager,
 +				partitionConsumableNotifier,
 +				ioManager,
 +				rpdd.sendScheduleOrUpdateConsumersMessage());
 +
 +			setupPartition(parititons[counter]);
 +			counter++;
 +		}
 
 -		for (final ResultPartition partition : producedPartitions) {
 -			setupPartition(partition);
 +		allPartitions.put(taskId, parititons);
 +
 +		metrics.initializeOutputBufferMetrics(parititons);
 
 Review comment:
   Initially, we thought that the ShuffleService would create partitions/gates 
separately.
   Now it seems that the metrics and the partition manager keep 
partitions/gates by task.
   That might then change the ShuffleService interface.
   
   As one of the steps, we should probably refactor the netty specifics out of 
`TaskIOMetricGroup` and register those specific metrics inside the netty 
ShuffleService.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
flinkbot edited a comment on issue #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-474882848
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @sunjincheng121 [committer]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @sunjincheng121 [committer]
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 edited a comment on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
sunjincheng121 edited a comment on issue #8024: [FLINK-11971] Fix kubernetes 
check in end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-474994662
 
 
   Thank you @aljoscha @dawidwys @hequn8128 I am checking the change now...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 commented on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
sunjincheng121 commented on issue #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-474994662
 
 
   Thank you @aljoscha @dawidwys I am checking the change now...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 commented on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
sunjincheng121 commented on issue #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-474994359
 
 
   @flinkbot approve description
   @flinkbot approve consensus


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol merged pull request #7914: [FLINK-11786][travis] Merge cron branches into master

2019-03-20 Thread GitBox
zentol merged pull request #7914: [FLINK-11786][travis] Merge cron branches 
into master
URL: https://github.com/apache/flink/pull/7914
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Issue Comment Deleted] (FLINK-11984) StreamingFileSink docs do not mention S3 savepoint caveats.

2019-03-20 Thread Jamie Grier (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jamie Grier updated FLINK-11984:

Comment: was deleted

(was: [~kkl0u] What are the S3 savepoint caveats?)

> StreamingFileSink docs do not mention S3 savepoint caveats.
> ---
>
> Key: FLINK-11984
> URL: https://issues.apache.org/jira/browse/FLINK-11984
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Documentation
>Affects Versions: 1.7.2
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11984) StreamingFileSink docs do not mention S3 savepoint caveats.

2019-03-20 Thread Jamie Grier (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797404#comment-16797404
 ] 

Jamie Grier commented on FLINK-11984:
-

[~kkl0u] What are the S3 savepoint caveats?

> StreamingFileSink docs do not mention S3 savepoint caveats.
> ---
>
> Key: FLINK-11984
> URL: https://issues.apache.org/jira/browse/FLINK-11984
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Documentation
>Affects Versions: 1.7.2
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #8025: [docs] fix typos in Task class

2019-03-20 Thread GitBox
flinkbot commented on issue #8025: [docs] fix typos in Task class
URL: https://github.com/apache/flink/pull/8025#issuecomment-474943155
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] linuxhadoop opened a new pull request #8025: [docs] fix typos in Task class

2019-03-20 Thread GitBox
linuxhadoop opened a new pull request #8025: [docs] fix typos in Task class
URL: https://github.com/apache/flink/pull/8025
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] npav commented on issue #3977: [FLINK-5490] Not allowing sinks to be cleared when getting the execut…

2019-03-20 Thread GitBox
npav commented on issue #3977: [FLINK-5490] Not allowing sinks to be cleared 
when getting the execut…
URL: https://github.com/apache/flink/pull/3977#issuecomment-474941597
 
 
   What's the status of this? I ran into the same issue and found out this bug 
independently as well. Is someone working on the requested test, or should I?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] npav edited a comment on issue #3977: [FLINK-5490] Not allowing sinks to be cleared when getting the execut…

2019-03-20 Thread GitBox
npav edited a comment on issue #3977: [FLINK-5490] Not allowing sinks to be 
cleared when getting the execut…
URL: https://github.com/apache/flink/pull/3977#issuecomment-474941597
 
 
   What's the status of this? I ran into the same issue and discovered this bug 
independently as well. Is someone working on the requested test, or should I?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-11986) Add micro benchmark for state operations

2019-03-20 Thread Yu Li (JIRA)
Yu Li created FLINK-11986:
-

 Summary: Add micro benchmark for state operations
 Key: FLINK-11986
 URL: https://issues.apache.org/jira/browse/FLINK-11986
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Reporter: Yu Li
Assignee: Yu Li


Currently in the 
[flink-benchmark|https://github.com/dataArtisans/flink-benchmarks] project we 
already have a JMH case for the whole backend, but none for finer-grained state 
operations, so here we propose adding JMH cases for them, including (but not 
limited to):
 * ValueState
 ** testPut
 ** testGet
 * ListState
 ** testUpdate
 ** testGet
 ** testAddAll
 * MapState
 ** testPut
 ** testGet
 ** testContains
 ** testKeys
 ** testValues
 ** testEntries
 ** testIterator
 ** testRemove
 ** testPutAll

We will create benchmarks for {{HeapKeyedStateBackend}} and 
{{RocksDBKeyedStateBackend}} separately.

We believe these micro benchmarks can help locate where a regression comes from 
whenever one is observed in the backend benchmark, and we can also use them to 
ensure there is no performance degradation when modifying related code.
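
As a rough illustration, a ValueState case could look like the following JMH 
sketch (the class name and the trivial in-memory state are placeholders; the 
real benchmarks would run against the actual backends):
{code:java}
import org.apache.flink.api.common.state.ValueState;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class ValueStateBenchmark {

    private ValueState<Long> valueState;

    @Setup
    public void setUp() {
        // Placeholder: the real setup would create a HeapKeyedStateBackend or
        // RocksDBKeyedStateBackend and obtain the state from a ValueStateDescriptor.
        valueState = new ValueState<Long>() {
            private Long value;

            @Override
            public Long value() {
                return value;
            }

            @Override
            public void update(Long newValue) {
                value = newValue;
            }

            @Override
            public void clear() {
                value = null;
            }
        };
    }

    @Benchmark
    public void testPut() throws Exception {
        valueState.update(1L);
    }

    @Benchmark
    public Long testGet() throws Exception {
        return valueState.value();
    }
}
{code}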



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11986) Add micro benchmark for state operations

2019-03-20 Thread Yu Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-11986:
--
Component/s: Runtime / State Backends

> Add micro benchmark for state operations
> 
>
> Key: FLINK-11986
> URL: https://issues.apache.org/jira/browse/FLINK-11986
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends, Tests
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> Currently in the 
> [flink-benchmark|https://github.com/dataArtisans/flink-benchmarks] project we 
> already have a JMH case for the whole backend, but none for finer-grained 
> state operations, so here we propose adding JMH cases for them, including 
> (but not limited to):
>  * ValueState
>  ** testPut
>  ** testGet
>  * ListState
>  ** testUpdate
>  ** testGet
>  ** testAddAll
>  * MapState
>  ** testPut
>  ** testGet
>  ** testContains
>  ** testKeys
>  ** testValues
>  ** testEntries
>  ** testIterator
>  ** testRemove
>  ** testPutAll
> We will create benchmarks for {{HeapKeyedStateBackend}} and 
> {{RocksDBKeyedStateBackend}} separately.
> We believe these micro benchmarks can help locate where a regression comes 
> from whenever one is observed in the backend benchmark, and we can also use 
> them to ensure there is no performance degradation when modifying related 
> code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] dawidwys merged pull request #8017: [hotfix][table] Introduced UnresolvedFieldReference & ResolvedFieldReference expressions

2019-03-20 Thread GitBox
dawidwys merged pull request #8017:  [hotfix][table] Introduced 
UnresolvedFieldReference &  ResolvedFieldReference expressions
URL: https://github.com/apache/flink/pull/8017
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] aljoscha commented on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
aljoscha commented on issue #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-474905652
 
 
   cc @sunjincheng121 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-11965) Provide a REST API for flink to submit plain SQL flink jobs.

2019-03-20 Thread chenminghua (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797286#comment-16797286
 ] 

chenminghua edited comment on FLINK-11965 at 3/20/19 3:54 PM:
--

Timo Walther, thank you for your attention!

 

My general idea of the implementation is as follows:

Extend “RestServerEndpoint” and “SimpleChannelInboundHandler” (which are used 
in the “Flink Runtime REST” module) to realize the REST network layer.

Use Antlr to implement a simple SQL interpreter; the interpreter parses the 
SQL statements and generates “TableEntry”, “ViewEntry” and DML objects.

Add a method (“createEnvironmentInstance(List<TableEntry> tables, 
List<ViewEntry> views, int parallelism)”) to “ExecutionContext” that registers 
“TableEntry” and “ViewEntry” objects with the “TableEnvironment”. The REST 
handler registers the “TableEntry” and “ViewEntry” objects generated by the 
SQL interpreter with the “TableEnvironment” through this new method, then 
applies the DML SQL, creates the “JobGraph”, and submits the job to the Flink 
cluster with “ProgramDeployer”.


was (Author: chenminghua):
Timo Walther, thank you for your attention!

 

My general idea of the implementation is as follows:

Extend “RestServerEndpoint” and “SimpleChannelInboundHandler” (which are used 
in the “Flink Runtime REST” module) to realize the REST network layer.

Use Antlr to implement an SQL interpreter; the interpreter parses the SQL 
statements and generates “TableEntry”, “ViewEntry” and DML objects.

Add a method (“createEnvironmentInstance(List<TableEntry> tables, 
List<ViewEntry> views, int parallelism)”) to “ExecutionContext” that registers 
“TableEntry” and “ViewEntry” objects with the “TableEnvironment”. The REST 
handler registers the “TableEntry” and “ViewEntry” objects generated by the 
SQL interpreter with the “TableEnvironment” through this new method, then 
applies the DML SQL, creates the “JobGraph”, and submits the job to the Flink 
cluster with “ProgramDeployer”.

> Provide a REST API for flink to submit plain SQL flink jobs.
> 
>
> Key: FLINK-11965
> URL: https://issues.apache.org/jira/browse/FLINK-11965
> Project: Flink
>  Issue Type: Improvement
>  Components: SQL / Client
>Reporter: chenminghua
>Priority: Major
>
> SQL Client makes it possible to use Flink with SQL without the need to write 
> Java or Scala programs, avoiding tedious compilation and packaging work. 
> However, SQL Client is used through a Command-Line Interface (CLI), which is 
> not convenient for remote application access. In addition, SQL Client must 
> define sourceTable and sinkTable in a yaml file and submit a job to the flink 
> cluster for each SQL statement processed. A slightly more complex stream 
> program is not easy to achieve through SQL Client. Therefore, it is necessary 
> to provide a REST API for flink to submit plain SQL flink jobs.
> The SQL jobs REST API includes: /job/submit, /job/list, /job/stop and 
> /job/restart. "/job/submit" is used to submit the SQL job, and its parameters 
> are passed through the JSON request. The JSON request looks like this: 
> {"name":"myJob", "statements":"CREATE TABLE userTable (user BIGINT, message 
> VARCHAR) WITH(type source, update-mode append, connector.type kafka, 
> connector.version 0.10, connector.topic test-user, connector.startup-mode 
> earliest-offset, connector.properties.0.key zookeeper.connect, 
> connector.properties.0.value zkHost:2181/kafka_2_1_0, 
> connector.properties.1.key bootstrap.servers, connector.properties.1.value 
> kafkaHost:9099, format.type json, format.fail-on-missing-field true, 
> format.derive-schema true); CREATE TABLE targetUserTable (user BIGINT, 
> message VARCHAR) WITH(type sink, ……); CREATE APPEND VIEW appView SELECT user, 
> message FROM userTable group by user, message; INSERT into targetUserTable 
> SELECT user, message FROM appView;"}. "name" is the name of the job. 
> "statements" is the content of the job, which contains multiple SQL 
> statements (supporting create table…, create view…, create append view… and 
> insert into… types of SQL statements).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11965) Provide a REST API for flink to submit plain SQL flink jobs.

2019-03-20 Thread chenminghua (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797286#comment-16797286
 ] 

chenminghua commented on FLINK-11965:
-

Timo Walther, thank you for your attention!

 

My general idea of the implementation is as follows:

Extend “RestServerEndpoint” and “SimpleChannelInboundHandler” (which are used 
in the “Flink Runtime REST” module) to realize the REST network layer.

Use Antlr to implement an SQL interpreter; the interpreter parses the SQL 
statements and generates “TableEntry”, “ViewEntry” and DML objects.

Add a method (“createEnvironmentInstance(List<TableEntry> tables, 
List<ViewEntry> views, int parallelism)”) to “ExecutionContext” that registers 
“TableEntry” and “ViewEntry” objects with the “TableEnvironment”. The REST 
handler registers the “TableEntry” and “ViewEntry” objects generated by the 
SQL interpreter with the “TableEnvironment” through this new method, then 
applies the DML SQL, creates the “JobGraph”, and submits the job to the Flink 
cluster with “ProgramDeployer”.
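
A very rough sketch of the handler side, assuming a Netty 
“SimpleChannelInboundHandler” as described above (the class name and wiring 
are hypothetical; the interpreter and deployment steps are only indicated as 
comments):
{code:java}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;

import java.nio.charset.StandardCharsets;

// Hypothetical handler; registration with the RestServerEndpoint is omitted.
public class SqlJobSubmitHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        if ("/job/submit".equals(request.uri())) {
            String json = request.content().toString(StandardCharsets.UTF_8);
            // 1. Parse {"name": ..., "statements": ...} from the JSON body.
            // 2. Run the Antlr-based interpreter over the statements to produce
            //    TableEntry / ViewEntry objects and the DML statement.
            // 3. Register them with the TableEnvironment, build the JobGraph,
            //    and hand the job to the ProgramDeployer.
        }
    }
}
{code}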

> Provide a REST API for flink to submit plain SQL flink jobs.
> 
>
> Key: FLINK-11965
> URL: https://issues.apache.org/jira/browse/FLINK-11965
> Project: Flink
>  Issue Type: Improvement
>  Components: SQL / Client
>Reporter: chenminghua
>Priority: Major
>
> SQL Client makes it possible to use Flink with SQL without the need to write 
> Java or Scala programs, avoiding tedious compilation and packaging work. 
> However, SQL Client is used through a Command-Line Interface (CLI), which is 
> not convenient for remote application access. In addition, SQL Client must 
> define sourceTable and sinkTable in a yaml file and submit a job to the flink 
> cluster for each SQL statement processed. A slightly more complex stream 
> program is not easy to achieve through SQL Client. Therefore, it is necessary 
> to provide a REST API for flink to submit plain SQL flink jobs.
> The SQL jobs REST API includes: /job/submit, /job/list, /job/stop and 
> /job/restart. "/job/submit" is used to submit the SQL job, and its parameters 
> are passed through the JSON request. The JSON request looks like this: 
> {"name":"myJob", "statements":"CREATE TABLE userTable (user BIGINT, message 
> VARCHAR) WITH(type source, update-mode append, connector.type kafka, 
> connector.version 0.10, connector.topic test-user, connector.startup-mode 
> earliest-offset, connector.properties.0.key zookeeper.connect, 
> connector.properties.0.value zkHost:2181/kafka_2_1_0, 
> connector.properties.1.key bootstrap.servers, connector.properties.1.value 
> kafkaHost:9099, format.type json, format.fail-on-missing-field true, 
> format.derive-schema true); CREATE TABLE targetUserTable (user BIGINT, 
> message VARCHAR) WITH(type sink, ……); CREATE APPEND VIEW appView SELECT user, 
> message FROM userTable group by user, message; INSERT into targetUserTable 
> SELECT user, message FROM appView;"}. "name" is the name of the job. 
> "statements" is the content of the job, which contains multiple SQL 
> statements (supporting create table…, create view…, create append view… and 
> insert into… types of SQL statements).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11971) Fix `Command: start_kubernetes_if_not_ruunning failed` error

2019-03-20 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797275#comment-16797275
 ] 

Hequn Cheng commented on FLINK-11971:
-

[~sunjincheng121] Thanks for looking into the problem and providing valuable 
information. I see two options to solve the problem:
 - Make the match condition looser. For example, we could match only the 
`Running` keyword. However, this would also make the check very fragile.
 - Add another match condition for the new status message. For example, we can 
change the original match logic (in test_kubernetes_embedded_job.sh) from
{code:java}
echo ${status} | grep -q "minikube: Running cluster: Running kubectl: 
Correctly Configured"

{code}
to
{code:java}
echo ${status} | grep -q "minikube: Running cluster: Running kubectl: 
Correctly Configured" \
|| echo ${status} | grep -q "host: Running kubelet: Running apiserver: 
Running kubectl: Correctly Configured"
{code}
`||` connects the two conditions so that either match leads to success.

Personally, I prefer the second option.
 [~sunjincheng121] [~aljoscha] What do you think?

> Fix `Command: start_kubernetes_if_not_ruunning failed` error
> 
>
> Key: FLINK-11971
> URL: https://issues.apache.org/jira/browse/FLINK-11971
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I ran the end-to-end test under Mac OS, I found the following two 
> problems:
>  1. The verification of the `minikube status` output is not robust enough: 
> the strings returned by different versions on different platforms differ, 
> which causes the following misjudgment:
>  When the `Command: start_kubernetes_if_not_ruunning failed` error occurs, 
> `minikube` has actually started successfully. The core reason is that there 
> is a bug in the `test_kubernetes_embedded_job.sh` script. The error message 
> is as follows:
>  !screenshot-1.png! 
> {code:java}
> Current check logic: echo ${status} | grep -q "minikube: Running cluster: 
> Running kubectl: Correctly Configured"
>  My local info
> jinchengsunjcs-iMac:flink-1.8.0 jincheng$ minikube status
> host: Running
> kubelet: Running
> apiserver: Running
> kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.101{code}
> So, I think we should improve the check logic of `minikube status`. What do 
> you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] azagrebin commented on a change in pull request #7822: [FLINK-11726][network] Refactor the creation of ResultPartition and InputGate into NetworkEnvironment

2019-03-20 Thread GitBox
azagrebin commented on a change in pull request #7822: [FLINK-11726][network] 
Refactor the creation of ResultPartition and InputGate into NetworkEnvironment
URL: https://github.com/apache/flink/pull/7822#discussion_r267402197
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
 ##
 @@ -106,28 +128,32 @@ public NetworkEnvironment(
new KvStateRegistry(),
null,
null,
+   new IOManagerAsync(),
IOManager.IOMode.SYNC,
partitionRequestInitialBackoff,
partitionRequestMaxBackoff,
networkBuffersPerChannel,
extraNetworkBuffersPerGate,
+   false,
enableCreditBased);
}
 
public NetworkEnvironment(
-   NetworkBufferPool networkBufferPool,
-   ConnectionManager connectionManager,
-   ResultPartitionManager resultPartitionManager,
-   TaskEventDispatcher taskEventDispatcher,
-   KvStateRegistry kvStateRegistry,
-   KvStateServer kvStateServer,
-   KvStateClientProxy kvStateClientProxy,
-   IOMode defaultIOMode,
-   int partitionRequestInitialBackoff,
-   int partitionRequestMaxBackoff,
-   int networkBuffersPerChannel,
-   int extraNetworkBuffersPerGate,
-   boolean enableCreditBased) {
+   NetworkBufferPool networkBufferPool,
+   ConnectionManager connectionManager,
+   ResultPartitionManager resultPartitionManager,
+   TaskEventDispatcher taskEventDispatcher,
+   KvStateRegistry kvStateRegistry,
+   KvStateServer kvStateServer,
+   KvStateClientProxy kvStateClientProxy,
+   IOManager ioManager,
+   IOMode defaultIOMode,
+   int partitionRequestInitialBackoff,
+   int partitionRequestMaxBackoff,
+   int networkBuffersPerChannel,
+   int extraNetworkBuffersPerGate,
+   boolean enableNetworkDetailedMetrics,
 
 Review comment:
   I think it is a good idea. In the end, 
`ShuffleManager.createShuffleService(config)` will probably need to accept the 
Flink config to allow users to set specific options for a certain shuffle 
service.
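   
   Something along these lines, sketched with hypothetical names (no such 
interface exists yet):
```java
// Hypothetical sketch of the interface shape discussed above.
public interface ShuffleManager {

    /** Placeholder for the service this manager creates. */
    interface ShuffleService {}

    // Accepting the Flink Configuration lets users pass options that are
    // specific to a particular shuffle service implementation.
    ShuffleService createShuffleService(org.apache.flink.configuration.Configuration config);
}
```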


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
flinkbot commented on issue #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024#issuecomment-474882848
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11971) Fix `Command: start_kubernetes_if_not_ruunning failed` error

2019-03-20 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11971:
---
Labels: pull-request-available  (was: )

> Fix `Command: start_kubernetes_if_not_ruunning failed` error
> 
>
> Key: FLINK-11971
> URL: https://issues.apache.org/jira/browse/FLINK-11971
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.8.0, 1.9.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png
>
>
> When I ran the end-to-end test under Mac OS, I found the following two 
> problems:
>  1. The verification of the `minikube status` output is not robust enough: 
> the strings returned by different versions on different platforms differ, 
> which causes the following misjudgment:
>  When the `Command: start_kubernetes_if_not_ruunning failed` error occurs, 
> `minikube` has actually started successfully. The core reason is that there 
> is a bug in the `test_kubernetes_embedded_job.sh` script. The error message 
> is as follows:
>  !screenshot-1.png! 
> {code:java}
> Current check logic: echo ${status} | grep -q "minikube: Running cluster: 
> Running kubectl: Correctly Configured"
>  My local info
> jinchengsunjcs-iMac:flink-1.8.0 jincheng$ minikube status
> host: Running
> kubelet: Running
> apiserver: Running
> kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.101{code}
> So, I think we should improve the check logic of `minikube status`. What do 
> you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] aljoscha opened a new pull request #8024: [FLINK-11971] Fix kubernetes check in end-to-end test

2019-03-20 Thread GitBox
aljoscha opened a new pull request #8024: [FLINK-11971] Fix kubernetes check in 
end-to-end test
URL: https://github.com/apache/flink/pull/8024
 
 
   @dawidwys You originally added this. Do you think this simpler check would 
work as well?
   
   @hequn8128 Sorry for taking this issue, but I quickly wanted to fix this 
before creating a new RC because I was running the end-to-end tests on my Mac.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-11431) Akka dependency not compatible with java 9 or above

2019-03-20 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-11431:


Assignee: Chesnay Schepler

> Akka dependency not compatible with java 9 or above
> ---
>
> Key: FLINK-11431
> URL: https://issues.apache.org/jira/browse/FLINK-11431
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Affects Versions: 1.7.1
>Reporter: Matthieu Bonneviot
>Assignee: Chesnay Schepler
>Priority: Minor
>
> {noformat}
> 2019-01-24 14:43:52,059 ERROR akka.remote.Remoting
>   - class [B cannot be cast to class [C ([B and [C are in module 
> java.base of loader 'bootstrap')
> java.lang.ClassCastException: class [B cannot be cast to class [C ([B and [C 
> are in module java.base of loader 'bootstrap')
>     at akka.remote.artery.FastHash$.ofString(LruBoundedCache.scala:18)
>     at 
> akka.remote.serialization.ActorRefResolveCache.hash(ActorRefResolveCache.scala:61)
>     at 
> akka.remote.serialization.ActorRefResolveCache.hash(ActorRefResolveCache.scala:55)
>     at 
> akka.remote.artery.LruBoundedCache.getOrCompute(LruBoundedCache.scala:110)
>     at 
> akka.remote.RemoteActorRefProvider.resolveActorRef(RemoteActorRefProvider.scala:403)
>     at akka.actor.SerializedActorRef.readResolve(ActorRef.scala:433)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> java.base/java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1250)
>     at 
> java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2096)
>     at 
> java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1594)
> {noformat}
> Running a jobmanager with java 11 fails with the call stack above.
> Flink master is using akka 2.4.20.
> After some investigation, the error in akka comes from the following line:
> {code}
> def ofString(s: String): Int = {
> val chars = Unsafe.instance.getObject(s, 
> EnvelopeBuffer.StringValueFieldOffset).asInstanceOf[Array[Char]]
> {code}
> Since Java 9 it is an array of bytes. The akka code in the newer version is:
> {code}
> public static int fastHash(String str) {
>   ...
> if (isJavaVersion9Plus) {
> final byte[] chars = (byte[]) instance.getObject(str, 
> stringValueFieldOffset);
> ...
> } else {
> final char[] chars = (char[]) instance.getObject(str, 
> stringValueFieldOffset);
>  {code}
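
For reference, the version check used by newer akka versions can be implemented 
with standard system properties (a sketch; akka's actual detection may differ):
{code:java}
// Java 8 and earlier report "1.8", "1.7", ...; Java 9+ report "9", "10", "11", ...
public static boolean isJavaVersion9Plus() {
    return !System.getProperty("java.specification.version").startsWith("1.");
}
{code}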



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11985) Remove ignored command line parameter from yarn_setup.md

2019-03-20 Thread Guowei Ma (JIRA)
Guowei Ma created FLINK-11985:
-

 Summary: Remove ignored command line parameter from yarn_setup.md
 Key: FLINK-11985
 URL: https://issues.apache.org/jira/browse/FLINK-11985
 Project: Flink
  Issue Type: Task
  Components: Documentation
Reporter: Guowei Ma


In "Flink Yarn Session" mode the command line parameter "-n, --container" has 
already been ignored. It would be more friendly for user that the documentation 
is consistent with the implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] azagrebin commented on issue #7938: [FLINK-10941] Keep slots which contain unconsumed result partitions (on top of #7186)

2019-03-20 Thread GitBox
azagrebin commented on issue #7938: [FLINK-10941] Keep slots which contain 
unconsumed result partitions (on top of #7186)
URL: https://github.com/apache/flink/pull/7938#issuecomment-474868838
 
 
   Thanks for the review @zhijiangW and @QiLuo-BD !
   
   As discussed, I agree that consumer confirmation based on the end of the 
transport would be more performant for certain long-running batch consumers. 
This would require handling `EndOfPartition` and responding with 
`CancelPartitionRequest` in `RemoteInputChannel` earlier. Right now the 
consumer processes all data before sending this response, although the 
transport might have already ended.
   
   Let’s introduce this option later to avoid bigger changes; it’s a quick fix 
for older Flink versions anyway, and we should at least ensure safety (waiting 
until consumption is done).
   
   I agree that `TASK_MANAGER_RELEASE_WHEN_RESULT_CONSUMED = false` (the old 
behaviour) seems rather useless, generally speaking. At the moment I added it 
more as a safety net that lets users fall back to the old behaviour if 
something does not work with the newer approach (this quick fix).
   The idea was to keep the option for some time and drop it later once we see 
that it works, or after the shuffle refactoring where it will most certainly 
be useless.
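   
   For illustration, such a fallback switch can be defined roughly like this 
(the key name, default and description are made up for the sketch):
```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class NetworkOptions {
    // Illustrative only: the actual key, default and description may differ.
    public static final ConfigOption<Boolean> RELEASE_WHEN_RESULT_CONSUMED =
        ConfigOptions.key("taskmanager.network.release-slot-when-result-consumed")
            .defaultValue(true)
            .withDescription("If false, fall back to the old behaviour of"
                + " releasing slots without waiting for result consumption.");
}
```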


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunhaibotb commented on a change in pull request #7959: [FLINK-11876] Introduce new InputSelectable, BoundedOneInput and BoundedMultiInput interfaces

2019-03-20 Thread GitBox
sunhaibotb commented on a change in pull request #7959: [FLINK-11876] Introduce 
new InputSelectable, BoundedOneInput and BoundedMultiInput interfaces
URL: https://github.com/apache/flink/pull/7959#discussion_r267379312
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/InputIdentifier.java
 ##
 @@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators;
+
+import org.apache.flink.annotation.PublicEvolving;
+
+/**
+ * The identifier for the input(s) of the operator. It is numbered starting 
from 1, and
+ * 1 indicates the first input. With one exception, -1 indicates all inputs.
+ */
+@PublicEvolving
+public final class InputIdentifier {
+
+   /**
+* The {@code InputIdentifier} object corresponding to the input id 
{@code -1}.
+*/
+   public static final InputIdentifier ALL = new InputIdentifier(-1);
+
+   /**
+* The {@code InputIdentifier} object corresponding to the input id 
{@code 1}.
+*/
+   public static final InputIdentifier FIRST = new InputIdentifier(1);
+
+   /**
+* The {@code InputIdentifier} object corresponding to the input id 
{@code 2}.
+*/
+   public static final InputIdentifier SECOND = new InputIdentifier(2);
+
+   private final int inputId;
+
+   private InputIdentifier(int inputId) {
 
 Review comment:
   I agree. The builder class would be provided in the updated code.
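   
   For example, a static factory on `InputIdentifier` could look roughly like 
this (a sketch, not the final code):
```java
// Reuses the predefined constants and validates the id.
public static InputIdentifier of(int inputId) {
    switch (inputId) {
        case -1: return ALL;
        case 1: return FIRST;
        case 2: return SECOND;
        default:
            if (inputId > 0) {
                return new InputIdentifier(inputId);
            }
            throw new IllegalArgumentException(
                "inputId must be positive, or -1 for all inputs, but was: " + inputId);
    }
}
```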


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-11825) Resolve name clash of StateTTL TimeCharacteristic class

2019-03-20 Thread Aljoscha Krettek (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-11825.

Resolution: Fixed

Fixed on master in
6a66f7acf370e12ad65ee24293ed47d2c5db225c

> Resolve name clash of StateTTL TimeCharacteristic class
> ---
>
> Key: FLINK-11825
> URL: https://issues.apache.org/jira/browse/FLINK-11825
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.7.2, 1.8.1
>Reporter: Fabian Hueske
>Assignee: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The StateTTL feature introduced the class 
> {{org.apache.flink.api.common.state.TimeCharacteristic}}, which clashes with 
> {{org.apache.flink.streaming.api.TimeCharacteristic}}. 
> This is a problem for two reasons:
> 1. Users get confused because they mistakenly import 
> {{org.apache.flink.api.common.state.TimeCharacteristic}}.
> 2. When using the StateTTL feature, users need to spell out the package name 
> for {{org.apache.flink.api.common.state.TimeCharacteristic}} because the 
> other class is most likely already imported.
> Since {{org.apache.flink.streaming.api.TimeCharacteristic}} is one of the 
> most used classes of the DataStream API, we should make sure that users can 
> use it without import problems.
> These errors are hard to spot and confusing for many users. 
> I see two ways to resolve the issue:
> 1. Drop {{org.apache.flink.api.common.state.TimeCharacteristic}} and use 
> {{org.apache.flink.streaming.api.TimeCharacteristic}}, throwing an exception 
> if an incorrect characteristic is used.
> 2. Rename the class {{org.apache.flink.api.common.state.TimeCharacteristic}} 
> to some other name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] aljoscha commented on issue #7921: [FLINK-11825][StateBackends] Resolve name clash of StateTTL TimeCharacteristic class

2019-03-20 Thread GitBox
aljoscha commented on issue #7921: [FLINK-11825][StateBackends] Resolve name 
clash of StateTTL TimeCharacteristic class
URL: https://github.com/apache/flink/pull/7921#issuecomment-474862257
 
 
   Thanks for your contribution! I merged this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] aljoscha closed pull request #7921: [FLINK-11825][StateBackends] Resolve name clash of StateTTL TimeCharacteristic class

2019-03-20 Thread GitBox
aljoscha closed pull request #7921: [FLINK-11825][StateBackends] Resolve name 
clash of StateTTL TimeCharacteristic class
URL: https://github.com/apache/flink/pull/7921
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-11985) Remove ignored command line parameter from yarn_setup.md

2019-03-20 Thread Guowei Ma (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma reassigned FLINK-11985:
-

Assignee: Guowei Ma

> Remove ignored command line parameter from yarn_setup.md
> 
>
> Key: FLINK-11985
> URL: https://issues.apache.org/jira/browse/FLINK-11985
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Trivial
>
> In "Flink Yarn Session" mode the command line parameter "-n, --container" has 
> already been ignored. It would be more friendly for user that the 
> documentation is consistent with the implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot edited a comment on issue #7978: [FLINK-11910] [Yarn] add customizable yarn application type

2019-03-20 Thread GitBox
flinkbot edited a comment on issue #7978: [FLINK-11910] [Yarn] add customizable 
yarn application type
URL: https://github.com/apache/flink/pull/7978#issuecomment-472607121
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @rmetzger [PMC], @suez1224 [committer]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @rmetzger [PMC]
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] rmetzger commented on issue #7978: [FLINK-11910] [Yarn] add customizable yarn application type

2019-03-20 Thread GitBox
rmetzger commented on issue #7978: [FLINK-11910] [Yarn] add customizable yarn 
application type
URL: https://github.com/apache/flink/pull/7978#issuecomment-474857889
 
 
   @suez1224 I'm sorry that the @flinkbot didn't know that you are a committer 
of Flink.
   I've updated the config and restarted the bot. It should soon show in the 
tracking comment that you are a committer!
   
   Regarding the PR: I agree that we should make the application type 
overridable entirely, not just the version! (this is also more intuitive from a 
configuration point of view)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7980: [FLINK-11913] Shadding cassandra driver dependencies in cassandra conector

2019-03-20 Thread GitBox
flinkbot edited a comment on issue #7980: [FLINK-11913] Shadding cassandra 
driver dependencies in cassandra conector
URL: https://github.com/apache/flink/pull/7980#issuecomment-472641520
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❗ 3. Needs [attention] from.
   - Needs attention by @zentol [PMC]
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7978: [FLINK-11910] [Yarn] add customizable yarn application type

2019-03-20 Thread GitBox
flinkbot edited a comment on issue #7978: [FLINK-11910] [Yarn] add customizable 
yarn application type
URL: https://github.com/apache/flink/pull/7978#issuecomment-472607121
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @rmetzger [PMC], @suez1224
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @rmetzger [PMC]
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] rmetzger commented on issue #7978: [FLINK-11910] [Yarn] add customizable yarn application type

2019-03-20 Thread GitBox
rmetzger commented on issue #7978: [FLINK-11910] [Yarn] add customizable yarn 
application type
URL: https://github.com/apache/flink/pull/7978#issuecomment-474850989
 
 
   @flinkbot approve description consensus


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11910) Make Yarn Application Type Customizable with Flink Version

2019-03-20 Thread Robert Metzger (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797209#comment-16797209
 ] 

Robert Metzger commented on FLINK-11910:


+1 I think this is a valid feature request.

> Make Yarn Application Type Customizable with Flink Version
> --
>
> Key: FLINK-11910
> URL: https://issues.apache.org/jira/browse/FLINK-11910
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.6.3, 1.6.4, 1.7.2
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: Screen Shot 2019-03-14 at 8.17.18 AM.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Internally, our organization supports multiple versions of Flink in 
> production. It would be more convenient for us to distinguish jobs of 
> different versions by using the Application Type. 
> A simple solution is to let users set "flink-version" via dynamic properties. 
> If the property is set, we add it as a suffix to "Apache Flink" by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot edited a comment on issue #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
flinkbot edited a comment on issue #7927: [FLINK-11603][metrics] Port the 
MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#issuecomment-470507137
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @zentol [PMC]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @zentol [PMC]
   * ❗ 3. Needs [attention] from.
   - Needs attention by @tillrohrmann [PMC]
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on issue #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on issue #7927: [FLINK-11603][metrics] Port the 
MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#issuecomment-474849580
 
 
   @flinkbot approve-until consensus


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] rmetzger commented on issue #7980: [FLINK-11913] Shadding cassandra driver dependencies in cassandra conector

2019-03-20 Thread GitBox
rmetzger commented on issue #7980: [FLINK-11913] Shadding cassandra driver 
dependencies in cassandra conector
URL: https://github.com/apache/flink/pull/7980#issuecomment-474849469
 
 
   @flinkbot attention @zentol 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267351817
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/rpc/akka/AkkaRpcServiceUtils.java
 ##
 @@ -100,11 +100,11 @@ public static RpcService createRpcService(
int port,
Configuration configuration) throws Exception {
final ActorSystem actorSystem = 
BootstrapTools.startActorSystem(configuration, hostname, port, LOG);
-   return instantiateAkkaRpcService(configuration, actorSystem);
+   return createRpcService(configuration, actorSystem);
}
 
@Nonnull
-   private static RpcService instantiateAkkaRpcService(Configuration 
configuration, ActorSystem actorSystem) {
+   public static RpcService createRpcService(Configuration configuration, 
ActorSystem actorSystem) {
 
 Review comment:
   Given that the JM/TM have configurable hostnames, it is unlikely that the 
MQS can get away without offering the same. Realistically, these should use the 
same hostname as the dispatcher/TM RpcService.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267356838
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/dump/MetricQueryService.java
 ##
 @@ -27,31 +28,28 @@
 import org.apache.flink.metrics.Metric;
 import org.apache.flink.runtime.clusterframework.types.ResourceID;
 import org.apache.flink.runtime.metrics.groups.AbstractMetricGroup;
+import org.apache.flink.runtime.rpc.RpcEndpoint;
+import org.apache.flink.runtime.rpc.RpcService;
 
-import akka.actor.ActorRef;
-import akka.actor.ActorSystem;
-import akka.actor.Props;
-import akka.actor.Status;
-import akka.actor.UntypedActor;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.io.IOException;
 import java.io.Serializable;
 import java.util.HashMap;
 import java.util.Map;
+import java.util.concurrent.CompletableFuture;
 
 import static 
org.apache.flink.runtime.metrics.dump.MetricDumpSerialization.MetricDumpSerializer;
 
 /**
  * The MetricQueryService creates a key-value representation of all metrics 
currently registered with Flink when queried.
  *
  * It is realized as an actor and can be notified of
- * - an added metric by calling {@link 
MetricQueryService#notifyOfAddedMetric(ActorRef, Metric, String, 
AbstractMetricGroup)}
- * - a removed metric by calling {@link 
MetricQueryService#notifyOfRemovedMetric(ActorRef, Metric)}
- * - a metric dump request by sending the return value of {@link 
MetricQueryService#getCreateDump()}
+ * - an added metric by calling {@link #addMetric(String, Metric, 
AbstractMetricGroup)}
+ * - a removed metric by calling {@link #removeMetric(Metric)}
+ * - a metric dump request by sending the return value of {@link 
#queryMetrics(Time)}
 
 Review comment:
   this is incorrect.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267349122
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/dump/MetricQueryService.java
 ##
 @@ -73,60 +71,50 @@ public String filterCharacters(String input) {
 
private final long messageSizeLimit;
 
-   public MetricQueryService(long messageSizeLimit) {
+   public MetricQueryService(RpcService rpcService, String endpointId, 
long messageSizeLimit) {
+   super(rpcService, endpointId);
this.messageSizeLimit = messageSizeLimit;
}
 
@Override
-   public void postStop() {
+   public CompletableFuture onStop() {
serializer.close();
+   return CompletableFuture.completedFuture(null);
}
 
-   @Override
-   public void onReceive(Object message) {
-   try {
-   if (message instanceof AddMetric) {
-   AddMetric added = (AddMetric) message;
-
-   String metricName = added.metricName;
-   Metric metric = added.metric;
-   AbstractMetricGroup group = added.group;
-
-   QueryScopeInfo info = 
group.getQueryServiceMetricInfo(FILTER);
-
-   if (metric instanceof Counter) {
-   counters.put((Counter) metric, new 
Tuple2<>(info, FILTER.filterCharacters(metricName)));
-   } else if (metric instanceof Gauge) {
-   gauges.put((Gauge) metric, new 
Tuple2<>(info, FILTER.filterCharacters(metricName)));
-   } else if (metric instanceof Histogram) {
-   histograms.put((Histogram) metric, new 
Tuple2<>(info, FILTER.filterCharacters(metricName)));
-   } else if (metric instanceof Meter) {
-   meters.put((Meter) metric, new 
Tuple2<>(info, FILTER.filterCharacters(metricName)));
-   }
-   } else if (message instanceof RemoveMetric) {
-   Metric metric = (((RemoveMetric) 
message).metric);
-   if (metric instanceof Counter) {
-   this.counters.remove(metric);
-   } else if (metric instanceof Gauge) {
-   this.gauges.remove(metric);
-   } else if (metric instanceof Histogram) {
-   this.histograms.remove(metric);
-   } else if (metric instanceof Meter) {
-   this.meters.remove(metric);
-   }
-   } else if (message instanceof CreateDump) {
-   
MetricDumpSerialization.MetricSerializationResult dump = 
serializer.serialize(counters, gauges, histograms, meters);
-
-   dump = enforceSizeLimit(dump);
-
-   getSender().tell(dump, getSelf());
-   } else {
-   LOG.warn("MetricQueryServiceActor received an 
invalid message. " + message.toString());
-   getSender().tell(new Status.Failure(new 
IOException("MetricQueryServiceActor received an invalid message. " + 
message.toString())), getSelf());
+   public void addMetric(String metricName, Metric metric, 
AbstractMetricGroup group) {
+   runAsync(() -> {
 
 Review comment:
   I'd prefer registering them right away; this seems like unnecessary overhead.
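
   For illustration, a minimal sketch of the two options being discussed (hypothetical names, with a plain Executor standing in for the endpoint's main thread; not the actual Flink code):

   class MetricRegistrationSketch {

       // Stand-in for the RpcEndpoint main thread; runAsync(...) plays this role in the PR.
       private final java.util.concurrent.Executor mainThreadExecutor =
               java.util.concurrent.Executors.newSingleThreadExecutor();

       // Variant in the PR: defer the registration to the main thread.
       void addMetricDeferred(Runnable registration) {
           mainThreadExecutor.execute(registration);
       }

       // Suggested variant: run the registration right away in the calling
       // thread, avoiding the extra dispatch (assumes the caller is allowed
       // to touch the metric maps directly).
       void addMetricDirect(Runnable registration) {
           registration.run();
       }
   }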




[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267357099
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/dump/MetricQueryService.java
 ##
 @@ -73,60 +71,50 @@ public String filterCharacters(String input) {
 
private final long messageSizeLimit;
 
-   public MetricQueryService(long messageSizeLimit) {
+   public MetricQueryService(RpcService rpcService, String endpointId, 
long messageSizeLimit) {
+   super(rpcService, endpointId);
this.messageSizeLimit = messageSizeLimit;
}
 
@Override
-   public void postStop() {
+   public CompletableFuture<Void> onStop() {
serializer.close();
+   return CompletableFuture.completedFuture(null);
}
 
-   @Override
-   public void onReceive(Object message) {
-   try {
 
 Review comment:
   What does the stack trace look like if any of the methods throws an exception?




[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267353233
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/metrics/MetricRegistryImplTest.java
 ##
 @@ -369,21 +367,17 @@ public void testQueryActorShutdown() throws Exception {
 
MetricRegistryImpl registry = new 
MetricRegistryImpl(MetricRegistryConfiguration.defaultMetricRegistryConfiguration());
 
-   final ActorSystem actorSystem = 
AkkaUtils.createDefaultActorSystem();
+   final RpcService rpcService = new TestingRpcService();
 
-   registry.startQueryService(actorSystem, null);
+   registry.startQueryService(rpcService, null);
 
-   ActorRef queryServiceActor = registry.getQueryService();
+   MetricQueryService queryService = 
checkNotNull(registry.getQueryService());
 
registry.shutdown().get();
 
-   try {
-   
Await.result(actorSystem.actorSelection(queryServiceActor.path()).resolveOne(timeout),
 timeout);
+   queryService.getTerminationFuture().get(timeout.toMillis(), 
TimeUnit.MILLISECONDS);
 
-   fail("The query actor should be terminated resulting in 
a ActorNotFound exception.");
-   } catch (ActorNotFound e) {
-   // we expect the query actor to be shut down
-   }
+   rpcService.stopService();
 
 Review comment:
   Isn't the registry supposed to shut this down? In any case, you likely want to do this in a finally block.
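
   A sketch of the suggested try/finally structure, reusing the declarations from the diff above (test fragment only, not a complete file):

   final RpcService rpcService = new TestingRpcService();
   try {
       registry.startQueryService(rpcService, null);
       MetricQueryService queryService = checkNotNull(registry.getQueryService());

       registry.shutdown().get();

       queryService.getTerminationFuture().get(timeout.toMillis(), TimeUnit.MILLISECONDS);
   } finally {
       // Stop the RPC service even when one of the calls above fails.
       rpcService.stopService();
   }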




[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267352698
 
 

 ##
 File path: 
flink-runtime/src/main/scala/org/apache/flink/runtime/taskmanager/TaskManager.scala
 ##
 @@ -1846,7 +1846,7 @@ object TaskManager {
 val metricRegistry = new MetricRegistryImpl(
   MetricRegistryConfiguration.fromConfiguration(configuration))
 
-metricRegistry.startQueryService(taskManagerSystem, resourceID)
+//metricRegistry.startQueryService(taskManagerSystem, resourceID)
 
 Review comment:
   remove the line instead




[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267354546
 
 

 ##
 File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/webmonitor/TestingRestfulGateway.java
 ##
 @@ -269,13 +270,13 @@
-   public Builder setRequestMetricQueryServicePathsSupplier(Supplier<CompletableFuture<Collection<String>>> requestMetricQueryServicePathsSupplier) {
-       this.requestMetricQueryServicePathsSupplier = requestMetricQueryServicePathsSupplier;
+   public Builder setRequestMetricQueryServiceGatewaysSupplier(Supplier<CompletableFuture<Collection<MetricQueryServiceGateway>>> requestMetricQueryServiceGatewaysSupplier) {
+       this.requestMetricQueryServiceGatewaysSupplier = requestMetricQueryServiceGatewaysSupplier;
        return this;
    }
 
-   public Builder setRequestTaskManagerMetricQueryServicePathsSupplier(Supplier<CompletableFuture<Collection<Tuple2<ResourceID, String>>>> requestTaskManagerMetricQueryServicePathsSupplier) {
-       this.requestTaskManagerMetricQueryServicePathsSupplier = requestTaskManagerMetricQueryServicePathsSupplier;
+   public Builder setRequestTaskManagerMetricQueryServiceGatewaysSupplier(Supplier<CompletableFuture<Collection<Tuple2<ResourceID, MetricQueryServiceGateway>>>> requestTaskManagerMetricQueryServiceGatewaysSupplier) {
 
 Review comment:
   typo: remove s in `Gateways`, also applies to field and `requestTaskManagerMetricQueryServiceGateways`




[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267357433
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/dump/MetricQueryService.java
 ##
 @@ -217,7 +205,7 @@ private void logDumpSizeWouldExceedLimit(final String 
metricType, boolean hasExc
 * {@code space : . ,} are replaced by {@code _} 
(underscore)
 * 
 */
-   static String replaceInvalidChars(String str) {
+   private static String replaceInvalidChars(String str) {
 
 Review comment:
   unrelated, please revert




[GitHub] [flink] rmetzger merged pull request #8013: Minor example typo fix

2019-03-20 Thread GitBox
rmetzger merged pull request #8013: Minor example typo fix
URL: https://github.com/apache/flink/pull/8013
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #8013: Minor example typo fix

2019-03-20 Thread GitBox
flinkbot edited a comment on issue #8013: Minor example typo fix
URL: https://github.com/apache/flink/pull/8013#issuecomment-474627783
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @rmetzger [PMC]
   * ✅ 2. There is [consensus] that the contribution should go into to Flink.
   - Approved by @rmetzger [PMC]
   * ❓ 3. Needs [attention] from.
   * ✅ 4. The change fits into the overall [architecture].
   - Approved by @rmetzger [PMC]
   * ✅ 5. Overall code [quality] is good.
   - Approved by @rmetzger [PMC]
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
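   For example, commenting `@flinkbot approve-until consensus` in a single message approves both `description` and `consensus`.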
   




[GitHub] [flink] rmetzger commented on issue #8013: Minor example typo fix

2019-03-20 Thread GitBox
rmetzger commented on issue #8013: Minor example typo fix
URL: https://github.com/apache/flink/pull/8013#issuecomment-474847205
 
 
   @flinkbot approve all
   
   Thanks a lot for the PR. I'll merge it right away!




[GitHub] [flink] zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port the MetricQueryService to the new RpcEndpoint

2019-03-20 Thread GitBox
zentol commented on a change in pull request #7927: [FLINK-11603][metrics] Port 
the MetricQueryService to the new RpcEndpoint
URL: https://github.com/apache/flink/pull/7927#discussion_r267347589
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/metrics/MetricRegistry.java
 ##
 @@ -76,10 +77,12 @@
ScopeFormats getScopeFormats();
 
/**
-* Returns the path of the {@link MetricQueryService} or null, if none 
is started.
+* Returns the gateway of the {@link MetricQueryService} or null, if 
none is started.
 *
-* @return Path of the MetricQueryService or null, if none is started
+* @return Gateway of the MetricQueryService or null, if none is started
 */
@Nullable
-   String getMetricQueryServicePath();
+   default MetricQueryServiceGateway getMetricQueryServiceGateway() {
+   return null;
 
 Review comment:
   seems unnecessary to have this as a default method.
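
   The suggested alternative, sketched as an interface fragment: declaring the method abstract makes each implementation state its choice explicitly instead of silently inheriting a null-returning default.

   @Nullable
   MetricQueryServiceGateway getMetricQueryServiceGateway();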




[GitHub] [flink] vthinkxie commented on issue #8016: [FLINK-10705][web]: Rework Flink Web Dashboard

2019-03-20 Thread GitBox
vthinkxie commented on issue #8016: [FLINK-10705][web]: Rework Flink Web 
Dashboard
URL: https://github.com/apache/flink/pull/8016#issuecomment-474846099
 
 
   Hi @rmetzger 
   Thanks a lot for your comments!
   
   > Problems I encountered:
   > 
   > * Scrolling in this area I marked blue leads to zooming. I understand why 
that is, but it was not very intuitive for me as a user. Do you have any ideas 
how to improve this behavior?
   > 
   > https://user-images.githubusercontent.com/89049/54687718-8855c280-4b1c-11e9-8ea3-f227b5ee3825.png
   > 
   
   
   
   The zoom behavior can be disabled by default; I will update this part later :)
   
   
   
   
   > * Starting a job with invalid parameters leads to silent errors (they are 
not reported back to the user)
   > 
   > https://user-images.githubusercontent.com/89049/54687950-0b771880-4b1d-11e9-95e7-7b2ded4800d6.png
   > 
   
   
   
   The error message will be displayed at the top right; all error messages from the server are shown there, as you mentioned in the last question.
   
   
   
   > Open questions:
   > 
   > 1. I assume the old interface has been removed in this PR?
   >(Your first PR had the option to choose between the old UI (`/`) and the new UI (`/new`)).
   >I'm undecided whether it makes sense to go with this dual approach.
   >If we keep the old one, I fear that the new UI won't be used. Going 
with the new thing might be "risky", if something breaks.
   
   
   Yes, I removed the old UI because it would be harder to maintain both the old and the new one at the same time, and it would become more complex if anyone wanted to add features to the web.
   I think the new UI covers all features now. If I missed or broke something, please let me know :)
   
   
   > 2. What are the "Messages" at the top for?
   
   
   It is an error message collector; any error message returned from the server will be displayed in the box now, not only when a user submits a job.
   




[jira] [Resolved] (FLINK-11980) Improve efficiency of iterating KeySelectionListener on notification

2019-03-20 Thread Stefan Richter (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Richter resolved FLINK-11980.

   Resolution: Fixed
Fix Version/s: 1.9.0
   1.8.0

Merged in:
master: 284e4486e5
release-1.8: ae91fd3bb5

> Improve efficiency of iterating KeySelectionListener on notification
> 
>
> Key: FLINK-11980
> URL: https://issues.apache.org/jira/browse/FLINK-11980
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.8.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.8.0, 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{KeySelectionListener}} was introduced for incremental TTL state cleanup as 
> a driver of the cleanup process. Listeners are notified whenever the current 
> key in the backend is set (i.e. for every event). The current implementation 
> of the collection that holds the listener is a {{HashSet}}, iterated via 
> `forEach` on each key change. This method comes with the overhead of creating 
> temporary objects, e.g. iterators, on every invocation, even if there is 
> no listener registered. We should rather use an {{ArrayList}} with for-loop 
> iteration in this hot code path to i) minimize overhead and ii) minimize 
> costs for the very likely case that there is no listener at all.
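
For illustration, a minimal sketch of the described pattern (hypothetical class name, not the actual patch):

{code:java}
import java.util.ArrayList;
import java.util.List;

public final class KeyContextSketch<K> {

	/** Hypothetical listener interface, mirroring KeySelectionListener. */
	public interface KeySelectionListener<K> {
		void keySelected(K newKey);
	}

	// ArrayList instead of HashSet: registration is rare, notification is hot.
	private final List<KeySelectionListener<K>> keySelectionListeners = new ArrayList<>();

	public void registerKeySelectionListener(KeySelectionListener<K> listener) {
		keySelectionListeners.add(listener);
	}

	public void setCurrentKey(K newKey) {
		// Index-based loop: no Iterator allocation per event, and nearly free
		// when no listener is registered (the common case without TTL cleanup).
		for (int i = 0; i < keySelectionListeners.size(); i++) {
			keySelectionListeners.get(i).keySelected(newKey);
		}
	}
}
{code}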



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

