Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14615
This seems pretty reasonable, assuming test coverage already exists on that
toJSON method.
Jenkins, this is ok to test.
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14573#discussion_r76717713
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -116,6 +116,14 @@ object SQLConf {
.longConf
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14649
This looks very helpful @andreweduffy! I see you have logging for how
effective the partition filtering is. Do you have any rough benchmarks of
particular workflows that were improved by this PR?
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14649#discussion_r76716506
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala ---
@@ -58,6 +60,9 @@ class ParquetFileFormat
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14649#discussion_r76716449
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala ---
@@ -220,6 +220,21 @@ trait FileFormat
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14671
@HyukjinKwon do you have time to work on that benchmark over the next week?
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14733
Jenkins, this is ok to test.
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14733#discussion_r76714035
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -125,12 +129,37 @@ case class
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14563
Agreed that it's better to avoid tiny PRs for typos where possible, though
when I'm doing something in the moment and encounter a typo I'd rather submit
at least _some_ PR rathe
Github user ash211 commented on the issue:
https://github.com/apache/spark/pull/14566
Thanks!
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/14581
Correct example value for spark.ssl.YYY.XXX settings
Docs adjustment to:
- link to other relevant section of docs
- correct statement about the only value when actually other values are
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/14566
Make logDir easily copy/paste-able
In many terminals double-clicking and dragging also includes the trailing
period. Simply remove this to make the value more easily copy/pasteable
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/14563
Typo: Fow -> For
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ash211/spark patch-8
Alternatively you can review and apply these changes
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/12951#issuecomment-219463293
Ping, anything more needed on this PR before merging?
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/12880
SPARK-15104 Fix spacing in log line
Otherwise you get logs that look like this (note: no space before NODE_LOCAL):
```
INFO [2016-05-03 21:18:51,477
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12252#discussion_r59823299
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala ---
@@ -246,13 +247,23 @@ object JdbcUtils extends Logging
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12252#discussion_r59813944
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala ---
@@ -246,13 +247,23 @@ object JdbcUtils extends Logging
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/12171#issuecomment-207240207
@JoshRosen Guava and Jetty excludes might have been done when they were
shaded:
https://issues.apache.org/jira/browse/SPARK-2848 - shade Guava
https
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/11223#issuecomment-191497186
@mccheah how's this going? What more do you need to do to get this in?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/8816#issuecomment-141480532
Jenkins, test this please.
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/6652#issuecomment-109390385
@shivaram presumably you're working on locality for reducers because it
improves overall job time by decreasing the number of shuffle bytes sent over
the network
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/6104#issuecomment-102089064
There was talk earlier of referencing the window function API that jooq
uses when implementing this in SparkSQL. Is it a goal to make this similar to
jooq's s
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/4106#issuecomment-96850330
@mccheah is this work still active? If not should maybe close this PR
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5080#issuecomment-94980262
I would find the ROLLUP feature useful for some of my workflows. If we're
trying to keep the SQLParser forward compatible indefinitely, what are your
plans for achi
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5597#issuecomment-94622067
Should a bug be filed with Mesos so that it logs the error once rather than
repeatedly filling up the disk?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5601#issuecomment-94604123
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-93224863
I think it's best to open a new PR
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5512#issuecomment-93154197
Really appreciate this work @JoshRosen !
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5516#issuecomment-93152146
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5380#issuecomment-90318628
@punya can you create a Jira ticket for this and put it in the PR subject
like "[SPARK-12345] Bump version of apache commons-math3"?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5255#issuecomment-87511855
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/5042#issuecomment-81768782
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/4693#issuecomment-75080190
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/4481#issuecomment-74346577
@mccheah for our use case what values would you set these to? Just bump
the retry count up to like 10?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/4475#issuecomment-73556492
// the [MLLib] tag is so this PR shows up here:
https://spark-prs.appspot.com/#mllib
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/4475#issuecomment-73556355
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/4139#issuecomment-70975989
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2310#issuecomment-70440912
Sounds good, I concur. Thanks!
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4074#discussion_r23125261
--- Diff: core/src/test/scala/org/apache/spark/rdd/RDDSuite.scala ---
@@ -545,6 +546,12 @@ class RDDSuite extends FunSuite with SharedSparkContext
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/1297#issuecomment-69521957
@jegonzal https://issues.apache.org/jira/browse/SPARK-4672 is relevant
specifically to GraphX encountering the stack overflow and has extensive
discussion, but I don
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68966151
Jenkins please test again
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68934167
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68802858
Unfortunately I'm more of an interested bystander than a real code
reviewer on this PR. There doesn't seem to be a go-to person to ask about
Mesos either
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-68802762
// I now have permissions to do this
Jenkins this is ok to test.
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3841#issuecomment-68760646
@WangTaoTheTonic I understand the problem you're observing and think it's
related specifically to YARN. Without being super familiar with YARN, I think
the app
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3841#discussion_r22482464
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/ExecutorRunnableUtil.scala ---
@@ -75,8 +75,9 @@ trait ExecutorRunnableUtil extends Logging
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3841#discussion_r22482224
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1691,15 +1691,12 @@ private[spark] object Utils extends Logging
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3882#discussion_r22428199
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -269,6 +269,43 @@ class SQLContext(@transient val sparkContext: SparkContext
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3882#issuecomment-68577529
Jenkins this is ok to test
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3861#discussion_r22418308
--- Diff: core/src/main/scala/org/apache/spark/executor/CoarseGrainedMesosExecutorBackend.scala ---
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3880#issuecomment-68550117
Hi @hxfeng I think this might be an accidental pull request -- merging 1.2
back into master would be a huge change!
Would you mind closing this PR? Thanks
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3876#issuecomment-68549893
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3875#issuecomment-68549854
Matches error message from 20 lines up, so LGTM
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3879#issuecomment-68549708
Hi @hxfeng did you mean to send this in? I don't see any code change, just
an empty merge commit. Would you mind closing this pull request if it was sent
acciden
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3861#discussion_r22417818
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -998,7 +998,7 @@ class SparkContext(config: SparkConf) extends Logging with
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3870#issuecomment-68506968
Jenkins this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3855#issuecomment-68475839
Jenkins, this is ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3763#issuecomment-67923578
Oh whoops my apologies for the noise
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3763#issuecomment-67907393
@andrewor14 would you mind merging this into `branch-1.1` also?
@JoshRosen pointed out it should go there too
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3751#issuecomment-67871664
This is a very minor change -- do we need a Jira ticket for it?
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3755#issuecomment-67778586
ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-67611926
Filed as https://issues.apache.org/jira/browse/SPARK-4899
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/60#issuecomment-67611653
I'll file a Jira for this shortly
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3722#issuecomment-67519651
+1
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3722#issuecomment-67441145
Also @msiddalingaiah, commits should typically have a SPARK Jira ticket
associated with them so we can properly credit contributors when we do release
notes, plus tie a
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3722#issuecomment-67441155
ok to test
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3722#issuecomment-67434853
Ok to test
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3689#discussion_r21794489
--- Diff: docs/configuration.md ---
@@ -1142,6 +1142,55 @@ Apart from these, the following properties are also available, and may be useful
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-66905391
I took a quick look, and some of the style seemed a little off from the
rest of Spark. I'm guessing the first Jenkins run will flag a few style errors
so it cou
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-66905021
@tnachen I don't have permissions to have Jenkins test this PR but
@pwendell does.
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2519#issuecomment-66710155
Agreed that there's probably not a ton that's immediately tunable. But
someone looking to "make it faster" could read this section, realize that the
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3658#issuecomment-66402583
cc @mccheah
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3130#issuecomment-65878128
I think a core disconnect here is that the Spark team thought the majority
use of Spark in applications would be through the spark-submit script. But
Matt and I (and
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3412#issuecomment-64088579
+1
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3410#issuecomment-64052615
Big +1 on putting the Spark version somewhere in all the various web UIs. I
probably wouldn't put it that big in the header, but definitely somewhere.
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3349#issuecomment-63585963
@pwendell are you guys taking docs-only changes this late in the game for
1.2.0?
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3349#discussion_r20548296
--- Diff: docs/running-on-mesos.md ---
@@ -183,6 +183,47 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).
In either case
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2518#issuecomment-63103760
@JoshRosen A PySpark test failed on this docs-only PR. Any idea what might
be wrong, or who would be the right person to help debug the flaky test?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/681#issuecomment-63045363
Correction, please use SPARK-1422
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/681#issuecomment-63043646
Oh and also, all GitHub PRs should have a Jira SPARK-XYZ ticket in the
title. I created one for you -- can you please update this PR name to be
"SPARK-4400 Added Spa
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2518#issuecomment-63025274
@andrewor14 Updated, would you be able to merge in time for the Spark 1.2.0
release?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2519#issuecomment-63024861
In the absence of feedback about the above questions and in an effort to
clarify this at least somewhat in the docs, I think we should merge this
docs-only PR as-is for
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/681#issuecomment-63024468
@sigmoidanalytics Did you ever take a look at libcloud to see if we could
abstract out some commonality with the spark_ec2 scripts?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/3120#issuecomment-62996573
@ksakellis it looks like this has a merge conflict now -- would you mind
updating this PR?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-62659168
I would appreciate merging this too -- I'd like to use Accumulators but
until their correctness is fixed I can't trust them for my use case.
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2984#issuecomment-60862101
One motivation for this change is to support wrapping an
application-specific REST API around Spark using the Dropwizard webapp
framework, which uses Jetty 9
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2867#issuecomment-60028659
The first thing I can imagine doing if I start looking at these is to just
check them all. Thoughts on making that operation easier, with say a check-all
button?
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2828#issuecomment-59639085
This is EXCELLENT work @JoshRosen ! Looking forward to future integration
tests that cover these sorts of behaviors.
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2684#issuecomment-59581783
More flavor on the perf numbers: we ran 6 jobs in a row before and after
(starting up a new driver on each job), discarded the first run, and took the
average of the
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r19002923
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala ---
@@ -362,9 +372,19 @@ private[spark] class Worker
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18976148
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala ---
@@ -341,7 +341,11 @@ private[spark] class Master(
case Some
Github user ash211 closed the pull request at:
https://github.com/apache/spark/pull/2664
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2664#issuecomment-58454687
That sounds very reasonable -- I'll close. Maybe in the future we should
have a "things to think about when going into production" section of the docs
b
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-57956057
I set the Target Version on SPARK-3166 to 1.2.0 so we can try to get this in
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2648#issuecomment-57955322
This seems right to me yingjie. Let's see if the tests work
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2662#issuecomment-57954626
I believe this was introduced in https://github.com/apache/spark/pull/2609
-- any idea why Jenkins didn't catch the build issue?
cc @mccheah
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/2664
SPARK-3805 Set spark.worker.cleanup.enabled to true by default
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ash211/spark SPARK-3805
Github user ash211 commented on the pull request:
https://github.com/apache/spark/pull/2401#issuecomment-57760828
This looks very reasonable. Counting the executor's bookkeeping core
against the resources also seems much more correct than pretending it doesn't
exist l
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2401#discussion_r18382851
--- Diff: docs/configuration.md ---
@@ -253,6 +253,17 @@ Apart from these, the following properties are also available, and may be useful
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2401#discussion_r18122496
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MemoryUtils.scala ---
@@ -0,0 +1,34 @@
+/*
+ * Licensed to the Apache Software