Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43589674
[The
code](https://github.com/witgo/spark/commit/6d7f2408a40bf4bb2889bf66fa61bced782cdefc#diff-2b593e0b4bd6eddab37f04968baa826c)
will make the checkpoint directory larger
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/835#issuecomment-43592654
Looks good TD, thanks for doing this.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43593158
Build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43593170
Build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43593512
Build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43593523
Build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43593862
Build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43593869
Build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43594231
Build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43594221
Build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43595109
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15099/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43595108
Build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43595307
Build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43595318
Build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43595417
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15100/
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/811
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43595692
Build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43595693
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15098/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43596699
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15102/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43596698
Build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43597231
Build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43597220
Build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43598989
Build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43598990
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15103/
---
Github user ueshin commented on the pull request:
https://github.com/apache/spark/pull/825#issuecomment-43599450
@rxin Thank you for your comment.
I checked the code #734, not deeply yet, though.
It seems like broadcast hash join is used only for `Inner` join so
broadcast
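[Editorial note: the broadcast hash join mentioned above pairs naturally with `Inner` joins because only matching rows are emitted. As a language-neutral illustration only (the function name and row format below are hypothetical, not Spark's API), here is a minimal sketch: build a hash table from the small (broadcast) side, then probe it while streaming the large side.]

```python
def broadcast_hash_inner_join(small, large, key):
    """Inner hash join sketch: hash the small (broadcast) side once,
    then stream the large side and probe the table.  Rows with no
    match are simply dropped, which is the Inner-join semantics."""
    table = {}
    for row in small:
        table.setdefault(row[key], []).append(row)
    out = []
    for row in large:
        for match in table.get(row[key], []):  # no match -> row dropped
            merged = dict(match)
            merged.update({f"r_{k}": v for k, v in row.items() if k != key})
            out.append(merged)
    return out

small = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
large = [{"id": 1, "v": 10}, {"id": 3, "v": 30}]
print(broadcast_hash_inner_join(small, large, "id"))
```

An outer join would additionally have to emit unmatched rows with nulls, which is why the strategy choice depends on the join type.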
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43601624
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15101/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/772#issuecomment-43601623
Build finished.
---
GitHub user ueshin opened a pull request:
https://github.com/apache/spark/pull/836
[SPARK-1889] [SQL] Apply splitConjunctivePredicates to join condition while
finding join keys.
When tables are equi-joined by multiple-keys `HashJoin` should be used, but
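[Editorial note: Catalyst's `splitConjunctivePredicates` flattens a tree of `And` expressions into a list of conjuncts, so each equi-join key pair can be found individually. A minimal Python sketch of the idea, with a toy expression type standing in for Catalyst expressions (all names here are illustrative, not Spark's):]

```python
from dataclasses import dataclass

@dataclass
class And:
    left: object
    right: object

def split_conjunctive_predicates(cond):
    """Recursively flatten nested And nodes into a flat list of
    conjuncts; each conjunct can then be tested for being an
    equi-join key predicate."""
    if isinstance(cond, And):
        return (split_conjunctive_predicates(cond.left)
                + split_conjunctive_predicates(cond.right))
    return [cond]

# (a.k1 = b.k1 AND a.k2 = b.k2) AND a.x > 5
cond = And(And("a.k1 = b.k1", "a.k2 = b.k2"), "a.x > 5")
print(split_conjunctive_predicates(cond))
```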
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43602450
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/836#issuecomment-43602467
Merged build started.
---
Github user ghidi commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43603009
I replaced Thread.getContextClassLoader with
Utils.getContextOrSparkClassLoader.
---
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43604360
Hey @ghidi
Sorry I should have mentioned. In order to speed up the process (so that I
can cut another RC for Spark 1.0), I cloned your branch and made the fix
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/837
add support for left semi join
Just submit another solution for #395
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/adrian-wang/spark
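[Editorial note: a left semi join returns each left row at most once when any right row matches on the key, and no columns from the right side. A hedged, illustrative Python sketch of that semantics (names hypothetical, not the PR's implementation):]

```python
def left_semi_join(left, right, key):
    """Left semi join sketch: keep a left row iff SOME right row
    matches on the key.  Right-side columns are never emitted, and
    multiple right matches still yield the left row only once."""
    right_keys = {row[key] for row in right}
    return [row for row in left if row[key] in right_keys]

left = [{"id": 1}, {"id": 2}, {"id": 3}]
right = [{"id": 2, "v": 9}, {"id": 2, "v": 8}]
print(left_semi_join(left, right, "id"))  # one row, despite two matches
```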
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/837#issuecomment-43605087
This is a solution with #418 from @marmbrus .
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/837#issuecomment-43605156
Can one of the admins verify this patch?
---
Github user ghidi closed the pull request at:
https://github.com/apache/spark/pull/821
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43608181
@mateiz @mengxr
I added a new `cachePoint` operation to RDD
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/836#issuecomment-43608895
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/836#issuecomment-43608898
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15104/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43608896
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/821#issuecomment-43608897
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15105/
---
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/791#issuecomment-43611475
As we know, memory store is used for add, read, remove blocks. Reading and
removing is quite simple, so let's focus on adding.
Adding may trigger dropping action,
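[Editorial note: the MemoryStore behavior described above — a put may first have to drop existing blocks to free space — can be sketched in a few lines. This is a deliberately single-threaded toy (all names hypothetical, not Spark's MemoryStore) and it ignores exactly the shared-state concurrency issues raised in the next comments:]

```python
from collections import OrderedDict

class ToyMemoryStore:
    """Single-threaded sketch: putting a block may drop the oldest
    blocks until the new one fits ("adding may trigger dropping")."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.blocks = OrderedDict()  # block_id -> size

    def put(self, block_id, size):
        if size > self.capacity:
            return False  # can never fit, even with everything dropped
        while self.used + size > self.capacity:
            _, dropped_size = self.blocks.popitem(last=False)  # drop oldest
            self.used -= dropped_size
        self.blocks[block_id] = size
        self.used += size
        return True

store = ToyMemoryStore(capacity=100)
store.put("a", 60)
store.put("b", 30)
store.put("c", 50)           # must drop "a" to make room
print(list(store.blocks))    # ['b', 'c']
```

With concurrent callers, the check-then-drop sequence above is a race unless the shared `used`/`blocks` state is protected, which is the thread-safety point being debated.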
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/791#issuecomment-43618603
It is not MT safe because the PR is checking/modifying shared state (like
the dropping variable) in an unsafe manner.
I will comment in detail on the patch later today
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r12840780
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -166,45 +166,51 @@ private class MemoryStore(blockManager: BlockManager,
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r12840885
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -243,10 +250,13 @@ private class MemoryStore(blockManager: BlockManager,
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r12841208
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -166,45 +166,51 @@ private class MemoryStore(blockManager: BlockManager,
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43654286
@marmbrus
---
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/838
[Hotfix] Blacklisted flaky HiveCompatibility test
`lateral_view_outer` query sometimes returns a different set of 10 rows.
You can merge this pull request into a Git repository by running:
$ git
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43654562
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43654576
Merged build started.
---
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/839
[Minor] Move JdbcRDDSuite to the correct package
It was in the wrong package
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/andrewor14/spark
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43656301
Merged build started.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43656940
Another [solution](https://github.com/witgo/spark/compare/cachePoint).
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/838
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43658039
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43658056
Merged build started.
---
Github user douglaz commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-43659430
It isn't just about lines of code; it is about polluting the code with
`asInstanceOf`, the runtime errors that result, and incorrect pattern matching
on Sequences.
Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/813#issuecomment-43663175
To throw another wrench into the Union analogy, there is also the
little-used SparkContext#union, which has signatures for both Seq[RDD[T]] and
varargs RDD[T].
---
GitHub user codedeft opened a pull request:
https://github.com/apache/spark/pull/840
Adding OWL-QN optimizer for L1 regularizations. It can also handle L2 re...
Adding OWL-QN optimizer for L1 regularizations. It can also handle L2 and
L1 regularizations together (balanced with
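[Editorial note: OWL-QN extends L-BFGS to handle the non-smooth L1 penalty, and combining it with L2 gives elastic-net-style regularization. The core L1 mechanism can be illustrated with the soft-thresholding (proximal) operator; this is a hedged sketch of the general idea, not the breeze OWLQN implementation, and all names and step sizes below are illustrative:]

```python
def soft_threshold(w, l1):
    """Proximal operator of the L1 penalty: shrink toward zero and
    clamp at zero -- this is what produces exactly-sparse weights."""
    if w > l1:
        return w - l1
    if w < -l1:
        return w + l1
    return 0.0

def proximal_step(weights, grads, lr, l1, l2):
    """One proximal gradient step with both penalties: the smooth L2
    term is folded into the gradient, the L1 term is applied via
    soft-thresholding after the gradient step."""
    return [soft_threshold(w - lr * (g + l2 * w), lr * l1)
            for w, g in zip(weights, grads)]

w = proximal_step([0.5, -0.01, 2.0], [0.1, 0.0, -0.2], lr=0.1, l1=0.5, l2=0.0)
print(w)  # the small middle weight is driven exactly to zero
```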
Github user codedeft commented on the pull request:
https://github.com/apache/spark/pull/840#issuecomment-43665097
jira link :
https://issues.apache.org/jira/browse/SPARK-1892
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43665406
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43665411
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15106/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43665408
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43665407
Merged build finished. All automated tests passed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43665409
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15108/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/839#issuecomment-43665410
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15107/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/840#issuecomment-43665383
Can one of the admins verify this patch?
---
Github user codedeft commented on the pull request:
https://github.com/apache/spark/pull/840#issuecomment-43666271
To clarify - it requires the latest breeze. The OWL-QN in breeze had bugs,
which I fixed. I'm not sure if David's published an official release yet but
it's in the
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43667509
Jenkins, test this again.
---
Github user codedeft commented on the pull request:
https://github.com/apache/spark/pull/840#issuecomment-43667582
I'll try to get David to publish the latest breeze and change the project
file to reference the latest breeze.
---
GitHub user kanzhang opened a pull request:
https://github.com/apache/spark/pull/841
[SPARK-1822] SchemaRDD.count() should use optimizer
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/kanzhang/spark SPARK-1822
Alternatively
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/841#issuecomment-43671863
Can one of the admins verify this patch?
---
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/838#issuecomment-43672003
Jenkins, retest this please.
---
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/841#issuecomment-43673656
@marmbrus I tried to implement the formula you gave on the mailing list.
Not sure if I missed anything. Please take a look. Note I changed Count() to
return Long to match
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/841#issuecomment-43674365
Thanks for the contribution! Could use it in my own workflows.
Python ints are signed 32-bit numbers, right? Should make that a long
explicitly unless python
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/841#issuecomment-43675504
@ash211 In Python 2.X, it does promote an Int to Long when overflowing (it
still matters in doctests, where you have to be explicit about whether the
result value is 3 or 3L).
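[Editorial note: the promotion behavior discussed above can be checked directly. Under Python 2, overflowing the machine-word `int` silently yields a `long` (repr ending in `L`), which is exactly the 3-vs-3L doctest concern; under Python 3 there is a single arbitrary-precision `int`, so the distinction disappears. A small Python 3 sketch:]

```python
import sys

# Python 2: sys.maxint + 1 silently becomes a `long`, so a doctest
# expecting `3` fails when the result prints as `3L`.
# Python 3: one arbitrary-precision int type; no promotion, no "L" suffix.
big = sys.maxsize + 1          # larger than any machine-word value
print(type(big).__name__)      # 'int' on Python 3 ('long' on Python 2)
print(big > sys.maxsize)       # True -- no wraparound
```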
Github user kanzhang commented on the pull request:
https://github.com/apache/spark/pull/760#issuecomment-43680692
IMHO, slicing a sequence shouldn't change its element values
(floating-point representations), same for ```take``` and ```drop```.
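[Editorial note: the invariant argued for above — `take`/`drop` should return elements bit-for-bit unchanged — can be demonstrated with Python slicing, which copies references and so cannot alter floating-point representations:]

```python
import math

xs = [0.1, 0.2, 0.30000000000000004, math.pi]
head, tail = xs[:2], xs[2:]    # analogous to take(2) / drop(2)

# Every element compares bit-identical to the original: the hex
# representation of each float is unchanged by slicing.
print(head + tail == xs)                                   # True
print([x.hex() for x in head] == [x.hex() for x in xs[:2]])  # True
```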
---
Github user xiaocai00 commented on a diff in the pull request:
https://github.com/apache/spark/pull/734#discussion_r12868794
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins.scala ---
@@ -142,6 +136,68 @@ case class HashJoin(
/**
* ::
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/842
[Minor] Correct example of creating a new SparkConf
The example code on the configuration page currently does not compile.
You can merge this pull request into a Git repository by running:
$
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43685011
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43685020
Merged build started.
---
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12869294
--- Diff: docs/streaming-programming-guide.md ---
@@ -83,21 +82,21 @@ import org.apache.spark.streaming.api._
val ssc = new StreamingContext(local,
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12870111
--- Diff: docs/streaming-programming-guide.md ---
@@ -83,21 +82,21 @@ import org.apache.spark.streaming.api._
val ssc = new StreamingContext(local,
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12870184
--- Diff: docs/streaming-programming-guide.md ---
@@ -105,23 +104,22 @@ generating multiple new records from each record in
the source DStream. In this
Github user jaceklaskowski commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12870325
--- Diff: docs/streaming-programming-guide.md ---
@@ -306,12 +304,16 @@ need to know to write your streaming applications.
## Linking
To
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43689531
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/842#issuecomment-43689533
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15109/
---
GitHub user smungee opened a pull request:
https://github.com/apache/spark/pull/843
[SPARK-1250] Fixed misleading comments in bin/pyspark, bin/spark-class
Fixed a couple of misleading comments in bin/pyspark and bin/spark-class.
The comments make it seem like the script is looking
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/843#issuecomment-43689807
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43692883
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43692891
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43693054
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43693056
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15110/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43694015
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43694027
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43694158
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15111/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/831#issuecomment-43694156
Merged build finished.
---
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12877300
--- Diff: docs/streaming-programming-guide.md ---
@@ -306,12 +305,16 @@ need to know to write your streaming applications.
## Linking
To write
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12877362
--- Diff: docs/streaming-programming-guide.md ---
@@ -579,7 +582,7 @@ This is applied on a DStream containing words (say, the
`pairs` DStream containi
1)`
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/830#discussion_r12877361
--- Diff: docs/streaming-programming-guide.md ---
@@ -355,21 +358,21 @@ object has to be created, which is the main entry
point of all Spark Streaming f
A