Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/240#issuecomment-38776405
Uh, creating a different PR is a good idea.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project
GitHub user witgo reopened a pull request:
https://github.com/apache/spark/pull/234
Fix SPARK-1325: The Maven build error for Spark Tools
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1325
Alternatively you
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/234
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/325
Fix SPARK-1413: Parquet messes up stdout and stdin when used in Spark REPL
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1413
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/325#issuecomment-39595364
There is a problem and I do not know what the cause is:
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
case class
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/325#issuecomment-39619758
[class
parquet.Log](https://github.com/Parquet/parquet-mr/blob/master/parquet-common/src/main/java/parquet/Log.java)
has a static block (add a default handler in case
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/325#issuecomment-39637576
The parent logger of `Seq(parquet.hadoop.ColumnChunkPageWriteStore, ...)` is
`Logger.getLogger("parquet")`; we only need to set the
`Logger.getLogger("parquet"
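A minimal `java.util.logging` sketch of the approach described above: configure the shared "parquet" parent logger once instead of each child class. The handler cleanup mirrors the fact that `parquet.Log`'s static block installs a default handler. This is an illustrative sketch, not the actual Spark patch; the class name is hypothetical.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceParquet {
    public static void main(String[] args) {
        // Children such as "parquet.hadoop.ColumnChunkPageWriteStore" inherit
        // from the "parquet" logger, so one parent-level change covers them all.
        Logger parquetLogger = Logger.getLogger("parquet");
        // Remove any handler installed directly on this logger (parquet.Log's
        // static block adds a default console handler when it is on the classpath).
        for (Handler handler : parquetLogger.getHandlers()) {
            parquetLogger.removeHandler(handler);
        }
        parquetLogger.setLevel(Level.SEVERE);
        parquetLogger.setUseParentHandlers(false);
        System.out.println(parquetLogger.getLevel());
    }
}
```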
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/332
remove scalalogging-slf4j dependency
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark remove_scalalogging
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/333
Fix SPARK-1420: The Maven build error for Spark Catalyst
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1420
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/332#issuecomment-39701040
I did not find the call that affects performance.
It is possible that it is here:
in Spark Catalyst, `logger.debug` is called many times.
Maybe like
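The performance worry with a hot `logger.debug` call site is usually the cost of building the message even when debug is disabled. A minimal sketch of the guard pattern, where the `Supplier` argument stands in for what a scalalogging macro or a by-name parameter achieves (all names here are illustrative, not Spark's actual code):

```java
import java.util.function.Supplier;

public class LazyLog {
    static boolean debugEnabled = false;

    // The message is a Supplier, so the (possibly expensive) string is only
    // built when debug logging is actually enabled.
    static void logDebug(Supplier<String> msg) {
        if (debugEnabled) {
            System.out.println("DEBUG " + msg.get());
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        Supplier<String> expensive = () -> { calls[0]++; return "built"; };
        logDebug(expensive);          // disabled: supplier never invoked
        debugEnabled = true;
        logDebug(expensive);          // enabled: supplier invoked once
        System.out.println(calls[0]);
    }
}
```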
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/325#issuecomment-39706859
@AndreSchumacher
[parquet.Log](https://github.com/Parquet/parquet-mr/blob/master/parquet-common/src/main/java/parquet/Log.java)
has a static block (add a default
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/126#issuecomment-39816405
Good job, guys.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/357
Fix SPARK-1441: Spark Core compile error with Hadoop 0.23.x
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1441
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/325#issuecomment-39819593
@AndreSchumacher
You're right, the code has been modified.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/325#discussion_r11467386
--- Diff: core/src/main/scala/org/apache/spark/Logging.scala ---
@@ -135,4 +136,6 @@ trait Logging {
private object Logging {
@volatile private var
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/325#discussion_r11472274
--- Diff: core/src/main/scala/org/apache/spark/Logging.scala ---
@@ -135,4 +136,6 @@ trait Logging {
private object Logging {
@volatile private var
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/325#issuecomment-40052128
@pwendell
Thank you, this patch works.
But this only solves one problem: Spark's dependency is fixed, so
we can only use log4j
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/332#issuecomment-40221509
How do we let Jenkins run the test?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/379#issuecomment-40224441
@andrewor14
I don't speak English well. On the weekend, I will write a document.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/379#issuecomment-40279118
@andrewor14, @tdas, mind reviewing this?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/357#issuecomment-40301947
@srowen mind reviewing the PR?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/357#issuecomment-40304913
So, if someone compiles Spark with Hadoop 0.23.x, how is the profile
automatically activated?
```xml
<profile>
  <id>yarn-alpha</id>
  <dependencies>
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/357#issuecomment-40305799
```xml
<activation>
  <property><name>hadoop.version</name><value>[0.23,0.24)</value></property>
</activation>
```
It doesn't work
see
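For context, Maven's property-based activation only supports an exact value match (or testing for a property's presence); it does not parse version ranges like `[0.23,0.24)`, which is why the activation above has no effect. A sketch of what Maven does support (the profile id and value here are illustrative):

```xml
<profile>
  <id>yarn-alpha</id>
  <activation>
    <property>
      <name>hadoop.version</name>
      <!-- exact match only; ranges such as [0.23,0.24) are not supported here -->
      <value>0.23.10</value>
    </property>
  </activation>
</profile>
```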
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/379#issuecomment-40327489
@andrewor14, @tdas, @pwendell, mind reviewing the PR?
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/412
Make distribution
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark make_distribution
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/414
improve the readability of SparkContext.scala
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SparkContext
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/414#issuecomment-40675137
From the component life-cycle point of view, only the stop method is
incomplete; writing the initialization code and the start code separately is better.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/299#discussion_r11720287
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -123,6 +142,14 @@ object SparkSubmit {
val options = List
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/433#discussion_r11726876
--- Diff: project/SparkBuild.scala ---
@@ -52,7 +52,7 @@ object SparkBuild extends Build {
val SCALAC_JVM_VERSION = "jvm-1.6"
val JAVAC_JVM_VERSION
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/435
pom.xml modifications added to SparkBuild.scala
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SparkBuild
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/332#issuecomment-40863767
@marmbrus
You are right, macros are faster than the inline method. There's a small gap.
The [test
code](https://github.com/witgo/spark/blob/logger/core/src/test
GitHub user witgo reopened a pull request:
https://github.com/apache/spark/pull/234
Fix org.scala-lang:* inconsistent versions
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1325
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/234
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/465
Fix org.scala-lang:* inconsistent versions dependency for Maven
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1325
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/465#issuecomment-41006929
@srowen
I'm sorry, I submitted a modification to
sql/catalyst/pom.xml, sql/hive/pom.xml, and sql/core/pom.xml (four-space
indentation reformatted to two spaces).
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/357
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/480
Improved build configuration
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark format_pom
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/480#discussion_r11852400
--- Diff: pom.xml ---
@@ -519,6 +519,44 @@
</exclusions>
</dependency>
<dependency>
+<groupId>org.apache.avro</groupId>
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/480#discussion_r11852495
--- Diff: pom.xml ---
@@ -793,6 +831,157 @@
</build>
<profiles>
+<!-- SPARK-1121: Adds an explicit dependency on Avro to work around
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/480#discussion_r11852713
--- Diff: bagel/pom.xml ---
@@ -31,20 +31,6 @@
<name>Spark Project Bagel</name>
<url>http://spark.apache.org/</url>
- <profiles>
--- End diff
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/480#discussion_r11852992
--- Diff: examples/pom.xml ---
@@ -124,6 +110,10 @@
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/480#discussion_r11856833
--- Diff: pom.xml ---
@@ -892,10 +1081,11 @@
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/480#discussion_r11858762
--- Diff: pom.xml ---
@@ -892,10 +1081,11 @@
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/435
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/465
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/412
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/502#issuecomment-41144046
Why change to using the Maven build? [PR
480](https://github.com/apache/spark/pull/480) has some relevant changes.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/299#issuecomment-41150159
SPARK_DAEMON_OPTS seems to have no effect
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/510
Change the spark.ui.killEnabled default to false
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark killEnabled
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1031#issuecomment-52016199
Jenkins, retest this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1022#issuecomment-52138943
YARN does not seem to do any processing. We can use the solution in #1031
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1983
[WIP][SPARK-1405]Collapsed Gibbs sampling based Latent Dirichlet Allocation
This PR is based on @yinxusen's #476
You can merge this pull request into a Git repository by running:
$ git pull
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1619
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1603
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1989
Cancel the ackTimeoutMonitor in the stop method of ConnectionManager
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/228#issuecomment-52419470
How about recording the `(stageId, partitionId)` in `Accumulable`?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1822#issuecomment-52491562
Try this: `git commit -m "Big-ass commit" --allow-empty`, `git rebase -i
master`, `git push origin sql-memory-patch -f`
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1931#discussion_r16464399
--- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
@@ -66,10 +66,15 @@ private[spark] class ContextCleaner(sc: SparkContext)
extends
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2056
[SPARK-3139] Akka timeouts from ContextCleaner when cleaning shuffles
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-3139
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2056#issuecomment-52870430
I think this is the root cause:
[ShuffleBlockManager.scala#L207](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/storage
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2075
[SPARK-3169] Fix make-distribution.sh failure
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-3169
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-52882546
I think we need to modify this file: `sql/hive-thriftserver/pom.xml`
```xml
<dependency>
<groupId>org.spark-project.hive</groupId>
<artifactId>hive
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-52884754
`./make-distribution.sh -Pyarn -Phadoop-2.3 -Phive-thriftserver -Phive
-Dhadoop.version=2.3.0`.
`./bin/spark-sql --hiveconf hive.root.logger=INFO,console` seems
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1726#issuecomment-52922473
I tested it, but the compile failed.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1726#discussion_r16541517
--- Diff: external/flume-sink/pom.xml ---
@@ -65,12 +66,9 @@
/exclusions
/dependency
dependency
- <groupId>org.scala-lang
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2083
[WIP][SPARK-3098]In some cases, the result of RDD.distinct is inconsistent
cc @srowen
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/2075
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2056#issuecomment-53149146
In `removeShuffleBlocks`
```
for (mapId <- state.completedMapTasks; reduceId <- 0 until state.numBuckets) {
  val blockId = new ShuffleBlockId
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1341#issuecomment-53364537
@andrewor14 done
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1330#issuecomment-53365696
@andrewor14, @srowen
This mainly solves the problem of importing
`scala.language.postfixOps` and `org.scalatest.time.SpanSugar._` at the same time
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2056#issuecomment-53546333
@tdas @pwendell We do not need to wait for clearing `RDD` and `Broadcast`.
#2143 does not solve the timeout in the [removeShuffle
method](https://github.com/apache/spark/blob
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2056#issuecomment-53664268
Increase the timeout in [removeBroadcast
method](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/storage/BlockManagerMasterActor.scala#L175
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2175
[SPARK-3273] The Spark version in the welcome message of spark-shell is not
correct
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1877#issuecomment-53676540
@rxin could you take a look at this PR? Thanks!
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1877#issuecomment-53677259
[SPARK-3224](https://issues.apache.org/jira/browse/SPARK-3224) is the same
problem.
This PR adds some boundary checks and removes some redundant code
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/2175#discussion_r16822912
--- Diff: repl/src/main/scala/org/apache/spark/repl/SparkILoopInit.scala ---
@@ -26,7 +26,7 @@ trait SparkILoopInit
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-53700053
@mengxr This patch removes the `accumulable` operation, repairs formula
errors in the `dropOneDistSampler` method, and includes some performance
optimizations. About how I
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2175#issuecomment-53830851
@nchammas We should create a separate jira for the python-related issues
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2175#issuecomment-53830862
Jenkins, retest this please.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2197
[SPARK-3302] The wrong version information in SparkContext
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-3302
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/2197
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-54065708
test this please
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/2083
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1565#issuecomment-54252174
OK, I'll close it.
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1565
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/2196
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2196#issuecomment-54267918
@ScrapCodes @nchammas
I merged this PR into #2175 and am closing this one.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1022#issuecomment-54268060
I merged this PR into #1031 and am closing this one.
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1022
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2265
[Minor]Remove extra semicolon in FlumeStreamSuite.scala
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark FlumeStreamSuite
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2268
[SPARK-3397] Bump pom.xml version number of master branch to 1.2.0-SNAPSHOT
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2268#issuecomment-54441310
@srowen I agree with you. But
[SparkContext.SPARK_VERSION](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala#L1300)
has
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2035#issuecomment-54443276
LGTM
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/2056
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1788
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2311
[SPARK-3293] YARN's web UI shows SUCCEEDED when the driver throws an exception
in yarn-client mode
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1482#issuecomment-54917849
@aarondav I understand what you mean; I will submit the relevant code
tomorrow.
BTW, most of the OOMs occur in the deserialization process.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/2326
[Minor] rat exclude dependency-reduced-pom.xml
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark rat-excludes
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1877#discussion_r17302741
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1046,41 +1046,37 @@ class DAGScheduler(
case FetchFailed
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1877#issuecomment-54976720
screenshots:
![qq20140909-1](https://cloud.githubusercontent.com/assets/302879/4203071/131c5292-382d-11e4-88d3-6d9bb50a8389.png)
---