[GitHub] spark pull request #15166: [SPARK-17513][SQL] Make StreamExecution garbage-c...

2016-09-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/15166


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #15166: [SPARK-17513][SQL] Make StreamExecution garbage-c...

2016-09-20 Thread frreiss
Github user frreiss commented on a diff in the pull request:

https://github.com/apache/spark/pull/15166#discussion_r79730904
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala ---
@@ -125,6 +125,30 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter {
     )
   }
 
+  testQuietly("StreamExecution metadata garbage collection") {
+    val inputData = MemoryStream[Int]
+    val mapped = inputData.toDS().map(6 / _)
+
+    // Run 3 batches, and then assert that only 1 metadata file is left at the end
+    // since the first 2 should have been purged.
+    testStream(mapped)(
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3),
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3, 6, 3),
+      AddData(inputData, 4, 6),
+      CheckAnswer(6, 3, 6, 3, 1, 1),
+
+      AssertOnQuery("metadata log should contain only one file") { q =>
+        val metadataLogDir = new java.io.File(q.offsetLog.metadataPath.toString)
+        val logFileNames = metadataLogDir.listFiles().toSeq.map(_.getName())
+        val toTest = logFileNames.filter(! _.endsWith(".crc"))  // Workaround for SPARK-17475
--- End diff --

Either way is fine with me.





[GitHub] spark pull request #15166: [SPARK-17513][SQL] Make StreamExecution garbage-c...

2016-09-20 Thread frreiss
Github user frreiss commented on a diff in the pull request:

https://github.com/apache/spark/pull/15166#discussion_r79730664
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala ---
@@ -125,6 +125,30 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter {
     )
   }
 
+  testQuietly("StreamExecution metadata garbage collection") {
+    val inputData = MemoryStream[Int]
+    val mapped = inputData.toDS().map(6 / _)
+
+    // Run 3 batches, and then assert that only 1 metadata file is left at the end
+    // since the first 2 should have been purged.
+    testStream(mapped)(
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3),
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3, 6, 3),
+      AddData(inputData, 4, 6),
+      CheckAnswer(6, 3, 6, 3, 1, 1),
+
+      AssertOnQuery("metadata log should contain only one file") { q =>
+        val metadataLogDir = new java.io.File(q.offsetLog.metadataPath.toString)
+        val logFileNames = metadataLogDir.listFiles().toSeq.map(_.getName())
+        val toTest = logFileNames.filter(! _.endsWith(".crc"))  // Workaround for SPARK-17475
+        assert(toTest.size == 1 && toTest.head == "2")
+        true
--- End diff --

Ah, yes. The previous line (146) should be just `toTest.size == 1 && toTest.head == "2"`, with no `assert`.
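A minimal, self-contained sketch of what the corrected predicate body looks like once the `assert` is dropped. The object and method names here are made up for illustration, and the file names are simulated; the real test derives them from the query's offset log directory:

```scala
// Hypothetical stand-in for the body of the AssertOnQuery predicate:
// it must *return* the Boolean rather than wrap it in assert().
object MetadataLogCheck {
  def onlyBatchTwoRemains(fileNames: Seq[String]): Boolean = {
    // Drop Hadoop checksum sidecar files (workaround for SPARK-17475).
    val toTest = fileNames.filter(!_.endsWith(".crc"))
    // The Boolean expression itself is the return value -- no assert().
    toTest.size == 1 && toTest.head == "2"
  }

  def main(args: Array[String]): Unit = {
    // Simulated directory listing after three batches, with the first two purged.
    println(onlyBatchTwoRemains(Seq(".2.crc", "2")))
  }
}
```

Returning the Boolean lets the test harness report the failure under the `AssertOnQuery` message, instead of surfacing a bare `AssertionError` from inside the closure.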





[GitHub] spark pull request #15166: [SPARK-17513][SQL] Make StreamExecution garbage-c...

2016-09-20 Thread petermaxlee
Github user petermaxlee commented on a diff in the pull request:

https://github.com/apache/spark/pull/15166#discussion_r79730198
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala ---
@@ -125,6 +125,30 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter {
     )
   }
 
+  testQuietly("StreamExecution metadata garbage collection") {
+    val inputData = MemoryStream[Int]
+    val mapped = inputData.toDS().map(6 / _)
+
+    // Run 3 batches, and then assert that only 1 metadata file is left at the end
+    // since the first 2 should have been purged.
+    testStream(mapped)(
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3),
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3, 6, 3),
+      AddData(inputData, 4, 6),
+      CheckAnswer(6, 3, 6, 3, 1, 1),
+
+      AssertOnQuery("metadata log should contain only one file") { q =>
+        val metadataLogDir = new java.io.File(q.offsetLog.metadataPath.toString)
+        val logFileNames = metadataLogDir.listFiles().toSeq.map(_.getName())
+        val toTest = logFileNames.filter(! _.endsWith(".crc"))  // Workaround for SPARK-17475
--- End diff --

I think @frreiss added this to make it more obvious. I don't really have a preference here.





[GitHub] spark pull request #15166: [SPARK-17513][SQL] Make StreamExecution garbage-c...

2016-09-20 Thread petermaxlee
Github user petermaxlee commented on a diff in the pull request:

https://github.com/apache/spark/pull/15166#discussion_r79730104
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala ---
@@ -125,6 +125,30 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter {
     )
   }
 
+  testQuietly("StreamExecution metadata garbage collection") {
+    val inputData = MemoryStream[Int]
+    val mapped = inputData.toDS().map(6 / _)
+
+    // Run 3 batches, and then assert that only 1 metadata file is left at the end
+    // since the first 2 should have been purged.
+    testStream(mapped)(
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3),
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3, 6, 3),
+      AddData(inputData, 4, 6),
+      CheckAnswer(6, 3, 6, 3, 1, 1),
+
+      AssertOnQuery("metadata log should contain only one file") { q =>
+        val metadataLogDir = new java.io.File(q.offsetLog.metadataPath.toString)
+        val logFileNames = metadataLogDir.listFiles().toSeq.map(_.getName())
+        val toTest = logFileNames.filter(! _.endsWith(".crc"))  // Workaround for SPARK-17475
+        assert(toTest.size == 1 && toTest.head == "2")
+        true
--- End diff --

It still fails. There was an assert there.





[GitHub] spark pull request #15166: [SPARK-17513][SQL] Make StreamExecution garbage-c...

2016-09-20 Thread frreiss
Github user frreiss commented on a diff in the pull request:

https://github.com/apache/spark/pull/15166#discussion_r79722643
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala ---
@@ -125,6 +125,30 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter {
     )
   }
 
+  testQuietly("StreamExecution metadata garbage collection") {
+    val inputData = MemoryStream[Int]
+    val mapped = inputData.toDS().map(6 / _)
+
+    // Run 3 batches, and then assert that only 1 metadata file is left at the end
+    // since the first 2 should have been purged.
+    testStream(mapped)(
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3),
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3, 6, 3),
+      AddData(inputData, 4, 6),
+      CheckAnswer(6, 3, 6, 3, 1, 1),
+
+      AssertOnQuery("metadata log should contain only one file") { q =>
+        val metadataLogDir = new java.io.File(q.offsetLog.metadataPath.toString)
+        val logFileNames = metadataLogDir.listFiles().toSeq.map(_.getName())
+        val toTest = logFileNames.filter(! _.endsWith(".crc"))  // Workaround for SPARK-17475
+        assert(toTest.size == 1 && toTest.head == "2")
+        true
--- End diff --

This line (`true`) shouldn't be here. It makes the Assert always pass, even when the condition on the previous line isn't satisfied.





[GitHub] spark pull request #15166: [SPARK-17513][SQL] Make StreamExecution garbage-c...

2016-09-20 Thread wangmiao1981
Github user wangmiao1981 commented on a diff in the pull request:

https://github.com/apache/spark/pull/15166#discussion_r79720299
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala ---
@@ -125,6 +125,30 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter {
     )
   }
 
+  testQuietly("StreamExecution metadata garbage collection") {
+    val inputData = MemoryStream[Int]
+    val mapped = inputData.toDS().map(6 / _)
+
+    // Run 3 batches, and then assert that only 1 metadata file is left at the end
+    // since the first 2 should have been purged.
+    testStream(mapped)(
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3),
+      AddData(inputData, 1, 2),
+      CheckAnswer(6, 3, 6, 3),
+      AddData(inputData, 4, 6),
+      CheckAnswer(6, 3, 6, 3, 1, 1),
+
+      AssertOnQuery("metadata log should contain only one file") { q =>
+        val metadataLogDir = new java.io.File(q.offsetLog.metadataPath.toString)
+        val logFileNames = metadataLogDir.listFiles().toSeq.map(_.getName())
+        val toTest = logFileNames.filter(! _.endsWith(".crc"))  // Workaround for SPARK-17475
--- End diff --

Is the space between `!` and `_` intentional? Other similar code I've seen doesn't have the space.

