[GitHub] spark pull request #17613: [SPARK-20301][FLAKY-TEST][DO NOT MERGE] Fix Hadoop Shell.runCommand flakiness in Structured Streaming tests

2017-04-11 Thread brkyvz
Github user brkyvz commented on a diff in the pull request:

https://github.com/apache/spark/pull/17613#discussion_r111067826
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala ---
@@ -284,42 +284,38 @@ class StreamExecution(
       triggerExecutor.execute(() => {
         startTrigger()
 
-        val continueToRun =
-          if (isActive) {
-            reportTimeTaken("triggerExecution") {
-              if (currentBatchId < 0) {
-                // We'll do this initialization only once
-                populateStartOffsets(sparkSessionToRunBatches)
-                logDebug(s"Stream running from $committedOffsets to $availableOffsets")
-              } else {
-                constructNextBatch()
-              }
-              if (dataAvailable) {
-                currentStatus = currentStatus.copy(isDataAvailable = true)
-                updateStatusMessage("Processing new data")
-                runBatch(sparkSessionToRunBatches)
-              }
+        if (isActive) {
+          reportTimeTaken("triggerExecution") {
+            if (currentBatchId < 0) {
+              // We'll do this initialization only once
+              populateStartOffsets(sparkSessionToRunBatches)
+              logDebug(s"Stream running from $committedOffsets to $availableOffsets")
+            } else {
+              constructNextBatch()
             }
-            // Report trigger as finished and construct progress object.
-            finishTrigger(dataAvailable)
             if (dataAvailable) {
-              // Update committed offsets.
-              batchCommitLog.add(currentBatchId)
-              committedOffsets ++= availableOffsets
-              logDebug(s"batch ${currentBatchId} committed")
-              // We'll increase currentBatchId after we complete processing current batch's data
-              currentBatchId += 1
-            } else {
-              currentStatus = currentStatus.copy(isDataAvailable = false)
-              updateStatusMessage("Waiting for data to arrive")
-              Thread.sleep(pollingDelayMs)
+              currentStatus = currentStatus.copy(isDataAvailable = true)
+              updateStatusMessage("Processing new data")
+              runBatch(sparkSessionToRunBatches)
             }
-            true
+          }
+          // Report trigger as finished and construct progress object.
+          finishTrigger(dataAvailable)
--- End diff --

I don't think I moved it out. Are the diff and the whitespace changes making it confusing?
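
For readers tripped up by the collapsed whitespace in the quoted diff, here is a runnable paraphrase of the two shapes of the loop body (not the actual Spark code; the batch-running details are stubbed out with hypothetical helpers). In both shapes, `finishTrigger(dataAvailable)` runs after the closing brace of `reportTimeTaken("triggerExecution")`; only the `val continueToRun = ...` wrapper and one level of indentation changed.

```scala
// Runnable paraphrase of the quoted diff (batch internals stubbed out): finishTrigger is
// called after the reportTimeTaken("triggerExecution") block closes in both versions.
object LoopShapes {
  // Stand-ins for StreamExecution members; the real ones live in StreamExecution.scala.
  private def reportTimeTaken[T](name: String)(body: => T): T = body
  private def finishTrigger(dataAvailable: Boolean): Unit =
    println(s"finishTrigger(dataAvailable = $dataAvailable)")
  private def constructAndRunBatch(): Boolean = true // pretend data was available

  // Old shape: the whole body produced a value for `val continueToRun`.
  def oldShape(isActive: Boolean): Boolean = {
    val continueToRun =
      if (isActive) {
        val dataAvailable = reportTimeTaken("triggerExecution") {
          constructAndRunBatch()
        }
        // Report trigger as finished and construct progress object.
        finishTrigger(dataAvailable)
        true
      } else {
        false
      }
    continueToRun
  }

  // New shape: same ordering, with the wrapper val and one nesting level removed.
  def newShape(isActive: Boolean): Unit = {
    if (isActive) {
      val dataAvailable = reportTimeTaken("triggerExecution") {
        constructAndRunBatch()
      }
      // Report trigger as finished and construct progress object.
      finishTrigger(dataAvailable)
    }
  }

  def main(args: Array[String]): Unit = {
    oldShape(isActive = true)
    newShape(isActive = true)
  }
}
```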





[GitHub] spark pull request #17613: [SPARK-20301][FLAKY-TEST][DO NOT MERGE] Fix Hadoop Shell.runCommand flakiness in Structured Streaming tests

2017-04-11 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/17613#discussion_r111058990
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamTest.scala ---
@@ -277,6 +277,11 @@ trait StreamTest extends QueryTest with SharedSQLContext with Timeouts {
 
     def threadState =
       if (currentStream != null && currentStream.microBatchThread.isAlive) "alive" else "dead"
+    def threadStackTrace = if (currentStream != null && currentStream.microBatchThread.isAlive) {
--- End diff --

+1 on keeping this.
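
The quoted diff cuts off at the first line of the new helper, so here is a hedged, self-contained sketch of the idea being kept: when a test fails while the stream's micro-batch thread is still alive, capture that thread's stack trace so the failure message shows where it is stuck. The object, method, and thread names below are stand-ins, not the code from the PR.

```scala
// Stand-alone sketch (not the PR's code): dump the stack of a still-alive worker thread,
// the way a threadStackTrace helper can enrich a streaming test's failure message.
object ThreadStackTraceSketch {
  def stackTraceOf(thread: Thread): String =
    if (thread != null && thread.isAlive) {
      thread.getStackTrace.mkString("Thread stack trace:\n  ", "\n  ", "")
    } else {
      "" // dead or missing thread: nothing useful to report
    }

  def main(args: Array[String]): Unit = {
    val worker = new Thread(new Runnable {
      override def run(): Unit = Thread.sleep(10000)
    }, "micro-batch-thread")
    worker.setDaemon(true)
    worker.start()
    Thread.sleep(100)             // let the worker park in sleep first
    println(stackTraceOf(worker)) // prints where the still-alive thread is blocked
  }
}
```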





[GitHub] spark pull request #17613: [SPARK-20301][FLAKY-TEST][DO NOT MERGE] Fix Hadoop Shell.runCommand flakiness in Structured Streaming tests

2017-04-11 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/17613#discussion_r111058917
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala ---
@@ -284,42 +284,38 @@ class StreamExecution(
       triggerExecutor.execute(() => {
         startTrigger()
 
-        val continueToRun =
-          if (isActive) {
-            reportTimeTaken("triggerExecution") {
-              if (currentBatchId < 0) {
-                // We'll do this initialization only once
-                populateStartOffsets(sparkSessionToRunBatches)
-                logDebug(s"Stream running from $committedOffsets to $availableOffsets")
-              } else {
-                constructNextBatch()
-              }
-              if (dataAvailable) {
-                currentStatus = currentStatus.copy(isDataAvailable = true)
-                updateStatusMessage("Processing new data")
-                runBatch(sparkSessionToRunBatches)
-              }
+        if (isActive) {
+          reportTimeTaken("triggerExecution") {
+            if (currentBatchId < 0) {
+              // We'll do this initialization only once
+              populateStartOffsets(sparkSessionToRunBatches)
+              logDebug(s"Stream running from $committedOffsets to $availableOffsets")
+            } else {
+              constructNextBatch()
            }
-            // Report trigger as finished and construct progress object.
-            finishTrigger(dataAvailable)
             if (dataAvailable) {
-              // Update committed offsets.
-              batchCommitLog.add(currentBatchId)
-              committedOffsets ++= availableOffsets
-              logDebug(s"batch ${currentBatchId} committed")
-              // We'll increase currentBatchId after we complete processing current batch's data
-              currentBatchId += 1
-            } else {
-              currentStatus = currentStatus.copy(isDataAvailable = false)
-              updateStatusMessage("Waiting for data to arrive")
-              Thread.sleep(pollingDelayMs)
+              currentStatus = currentStatus.copy(isDataAvailable = true)
+              updateStatusMessage("Processing new data")
+              runBatch(sparkSessionToRunBatches)
             }
-            true
+          }
+          // Report trigger as finished and construct progress object.
+          finishTrigger(dataAvailable)
--- End diff --

Why did you move this out of the `reportTimeTaken { ... }` block?
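
Context for why the placement matters: only work executed inside the `reportTimeTaken(name) { ... }` braces contributes to the duration reported under `name`, so moving `finishTrigger` in or out changes what the `triggerExecution` timing covers. A minimal stand-alone illustration with a stand-in timer (not Spark's `ProgressReporter` implementation):

```scala
// Minimal stand-in (not Spark's ProgressReporter.reportTimeTaken): only the body passed
// inside the braces contributes to the duration recorded under the given name.
object ReportTimeTakenScope {
  private val durationsMs = scala.collection.mutable.Map.empty[String, Long]

  def reportTimeTaken[T](name: String)(body: => T): T = {
    val start = System.nanoTime()
    try body finally {
      durationsMs(name) = durationsMs.getOrElse(name, 0L) + (System.nanoTime() - start) / 1000000
    }
  }

  def main(args: Array[String]): Unit = {
    reportTimeTaken("triggerExecution") {
      Thread.sleep(50) // counted: runs inside the braces
    }
    Thread.sleep(50)   // not counted: runs after the block, as finishTrigger does here
    println(durationsMs) // roughly Map(triggerExecution -> 50)
  }
}
```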





[GitHub] spark pull request #17613: [SPARK-20301][FLAKY-TEST][DO NOT MERGE] Fix Hadoop Shell.runCommand flakiness in Structured Streaming tests

2017-04-11 Thread brkyvz
GitHub user brkyvz opened a pull request:

https://github.com/apache/spark/pull/17613

[SPARK-20301][FLAKY-TEST][DO NOT MERGE] Fix Hadoop Shell.runCommand flakiness in Structured Streaming tests

## What changes were proposed in this pull request?

Some Structured Streaming tests show flakiness related to Hadoop's `Shell.runCommand`.

## How was this patch tested?

Ran the flaky tests a thousand times locally and on Jenkins.
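
As a sketch of the local-retry part, a hypothetical harness (not part of this PR) that runs a test body many times and counts failures:

```scala
// Hypothetical harness (not part of this PR): run a block repeatedly and count how often it
// fails, one way to check locally whether a flaky test still fails after a change.
object RetryFlakyTest {
  def countFailures(attempts: Int)(body: => Unit): Int =
    (1 to attempts).count { i =>
      try { body; false }
      catch { case e: Throwable => println(s"attempt $i failed: ${e.getMessage}"); true }
    }

  def main(args: Array[String]): Unit = {
    val failures = countFailures(1000) {
      // invoke the flaky test body here; a simulated rare failure stands in for it
      assert(scala.util.Random.nextInt(500) != 0, "simulated flake")
    }
    println(s"$failures failures out of 1000 attempts")
  }
}
```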

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/brkyvz/spark flaky-stream-agg

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/17613.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #17613


commit c060e6b1b811f1e55d4ac0becf38683cfc1fe536
Author: Burak Yavuz 
Date:   2017-04-12T02:48:39Z

ready for jenkins



