Github user shivaram closed the pull request at:
https://github.com/apache/spark/pull/15029
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15015
Thanks @keypointt - Merging into master
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14980
Yeah I think it'll be good to do a separate PR and make sure it can build
corresponding to the Scala code in branch-2.0 etc. But let's do it after all the
comments here are addressed and th
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15051#discussion_r78301160
--- Diff: R/pkg/R/mllib.R ---
@@ -694,8 +694,8 @@ setMethod("predict", signature(object = "KMeansModel"),
#' }
#
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15051#discussion_r78406032
--- Diff: R/pkg/R/mllib.R ---
@@ -694,8 +694,8 @@ setMethod("predict", signature(object = "KMeansModel"),
#' }
#
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/1
The easiest way to debug the R test case is to see what has changed for
this particular `select` query that is failing and then update the test case.
If you have compiled with `-Psparkr
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14980
Thanks @junyangq and @felixcheung - Merging this into master once the
AppVeyor check passes
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14980
@junyangq As we discussed before, let's open a new PR for 2.0?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/1
@eyalfa Yes - we get the schema from Scala and assign the column names from
that [1]. I think @hvanhovell knows more about the compatibility requirements
Also @HyukjinKwon if there is a
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/1
Yeah the R change looks fine.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/1
Jenkins, retest this please
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15097
Thanks @WeichenXu123 Can we add some test cases for this ? Also was this
fix related to any problem you saw ? If so we can also include that in the test
cases etc.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15100#discussion_r78799049
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -0,0 +1,653 @@
+---
+title: "SparkR - Practical Guide"
+output:
+ htm
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15100
Merging this into branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15100
@junyangq We need to manually close this PR as GitHub doesn't pick up
commits not merged to master. Can you close this when you get a chance?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15131
It looks like `addFile` isn't working on Windows because we try to convert
the Windows file path into a URI and that fails. Not sure what the fix is in
this case.
cc @HyukjinKwo
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15051#discussion_r79297229
--- Diff: R/pkg/R/mllib.R ---
@@ -694,8 +694,14 @@ setMethod("predict", signature(object = "KMeansModel"),
#' }
#
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/15200
Skip building R vignettes if Spark is not built
## What changes were proposed in this pull request?
When we build the docs separately we don't have the JAR files from the
Spark bui
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15200
The only downside of this is that we won't be uploading our vignette to the
website. I think this is fine -- Any thoughts @junyangq @felixcheung
cc @rxin
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15223
cc @rxin
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/15223
Set R package version number along with mvn
## What changes were proposed in this pull request?
This PR sets the R package version while tagging releases. Note that since
R doesn
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15200
Yeah - so I'm thinking we should just auto-generate this and check in the
HTML file in git. It's not that big. When somebody updates the vignette we need
to remind them to regenerate it though
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15232
Thanks @HyukjinKwon - Fix looks good. But isn't it better to do the check
before we do the na.omit / as.integer ? Or will that miss some cases ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15328
AFAIK java arrays also have the same limitation - So if we convert lists to
arrays, this will also fail on the JVM side ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14195
I guess it would be good to have an additional test case, but I'm going to
merge this to make sure this makes 2.0 RCs. LGTM. We could look at ways to
make this more robust in the future
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14192
LGTM. Merging this to master, branch-2.0
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70846132
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14179#discussion_r70855973
--- Diff: R/pkg/R/sparkR.R ---
@@ -155,6 +155,9 @@ sparkR.sparkContext <- function(
existingPort <- Sys.getenv("EXISTING_SPARKR_B
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
Jenkins, retest this please
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/14208
[SPARK-16553][DOCS] Fix SQL example file name in docs
## What changes were proposed in this pull request?
Fixes a typo in the sql programming guide
## How was this patch tested
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14208
cc @rxin @liancheng
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/13868#discussion_r70896624
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -691,7 +692,8 @@ private[sql] class SQLConf extends Serializable with
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70920785
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70922747
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70922863
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70923795
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14206
LGTM. Merging this to master, branch-2.0
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r71041580
--- Diff: docs/sparkr.md ---
@@ -295,8 +294,7 @@ head(collect(df1))
# dapplyCollect
Like `dapply`, apply a function to each partition of
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r71041809
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,135 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r71041878
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,135 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14090
Thanks @NarineK for the updates. As a final thing I just had some
formatting problems when I tested out this change locally. Let me know if you
can't reproduce them. I just ran
```
cd
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14179
@krishnakalyan3 We don't need to modify the message. You can keep the
original message and just split it across two lines with something like the
`paste` function used in
https://githu
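(The `paste` trick suggested above can be sketched as follows; the message text here is invented for illustration and is not the actual SparkR warning being discussed:)

```r
# A long message split across source lines so each line stays under the
# R style checker's length limit; paste0() concatenates the pieces at
# runtime into a single string with no separator added between them.
msg <- paste0("Port file does not exist or is not readable; ",
              "make sure the SparkR backend was started correctly.")
cat(msg, "\n")
```

The resulting string is identical to what a single long literal would produce, so the user-visible message is unchanged.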
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
So I just tried the setup where I only have `enableHiveMetastore=F` for the
test case we are uncommenting and the `sparkR.session.stop` added to the other
test files as in this PR. That seems to
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
@felixcheung Just to confirm, based on discussion in SPARK-16508 is this
change good to merge ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14090
Thanks @NarineK - I tried it on a fresh Ubuntu VM and it rendered fine. I
think it has something to do with ruby / jekyll versions. The rendered docs
looked fine on the Ubuntu VM
LGTM
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14090
Merging this to master, branch-2.0
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14179#discussion_r71073655
--- Diff: R/pkg/R/sparkR.R ---
@@ -155,6 +155,10 @@ sparkR.sparkContext <- function(
existingPort <- Sys.getenv("EXISTING_SPARKR_B
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
Cool. Merging this to master, branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
Hmm ok - The only difference in the patch I tried out locally is that I had
the `sleep` in the loop test case. Did you remove that for some other reason ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
I just realized that my local build was not using the hive profile. If this
fails on Jenkins let's just go back to the original PR. Also I wonder if this
is something we should notify th
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14179
LGTM.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
@felixcheung I plan to merge this into master but skip branch-2.0 as I don't
want to introduce new test errors if we have another RC. Let me know if that
sounds good
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/14243
[SPARK-10683][SPARK-16510][SPARKR] Move SparkR include jar test to
SparkSubmitSuite
## What changes were proposed in this pull request?
This change moves the include jar test from R to
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
cc @felixcheung
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
LGTM. Merging into master
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
cc @sun-rui
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14243#discussion_r71180619
--- Diff: R/pkg/inst/tests/testthat/jarTest.R ---
@@ -16,17 +16,17 @@
#
library(SparkR)
-sparkR.session()
+sc <- sparkR.sess
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
@sun-rui That's a good point. Right now the test will be canceled if R is
not installed (from the line `assume(R.isInstalled)`). I also added a check to
make sure SparkR is installed in `SPARK_HOME
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/12836
@NarineK Not as far as I know
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14250
LGTM. Merging this to master, branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
cc @felixcheung @mengxr @sun-rui
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
Thanks @junyangq I'll take a look at this today. One question I had is
about adding `install_spark` as a fallback option in `sparkR.session` if the
jars are not found. Can we add that in th
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
Added the check to the `ignore` test case as well.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r71377561
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,84 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r71405599
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,84 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14264
Jenkins, ok to test
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r71451304
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,84 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
Thanks @sun-rui - Merging this to master and branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14309
I'm not sure I am a good reviewer for this as I don't fully understand the
consequences inside SQL for this change. cc @liancheng @rxin
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14309
Jenkins, ok to test
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14329
Thanks @felixcheung - LGTM. Merging this to master, branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14330
Yeah let's discuss this on JIRA / mailing lists and then get back to this
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14357
@junyangq Any idea how the tests were passing on Jenkins before this fix ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14357
I think this is related to
https://github.com/apache/spark/commit/142df4834bc33dc7b84b626c6ee3508ab1abe015
cc @dongjoon-hyun
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72663430
--- Diff: R/pkg/R/sparkR.R ---
@@ -365,6 +365,23 @@ sparkR.session <- function(
}
overrideEnvs(sparkConfigMap, param
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
Thanks @junyangq - this is looking pretty good. It would be good to add a
test for this (some discussion below).
@mengxr The R package version is still set as 2.0.0 in `DESCRIPTION` - so
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
@junyangq I just ran the CRAN checks locally and I see the problem you ran
into in #14357 -- The problem is that if we try to run tests which depend on a
Java-side change in master but not in
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
Not sure clean + rebuild will solve the problem here. The problem here is
that we load the Spark 2.0.0 JARs using `install_spark` (i.e. that didn't have
the fix in #14095) and we use R test
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r73006612
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,232 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r73007560
--- Diff: R/pkg/R/sparkR.R ---
@@ -365,6 +365,23 @@ sparkR.session <- function(
}
overrideEnvs(sparkConfigMap, param
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14433
I think a better change might be to change that message if we are launching
SparkR ? cc @felixcheung
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
@NarineK Thanks for the PR. The thing I worry about is that this will break
any code users write with the 2.0 release and they'll need to change their code
if we ship this in 2.1 -- Other
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14392#discussion_r73079016
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +659,106 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
Yeah I think something like that is fine. Basically doing some
pre-processing or post-processing after the UDF has run using our own R code is
a good way to add new features
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
@junyangq Is #14448 different from this PR or is it the same one on
branch-2.0? I can just merge this into both branches, so we don't need a new PR
I think
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
I see - So I was thinking that we could merge this into master as well, as
it's not going to fail any tests or affect any users building SparkR from source
-- I don't think we make any promises about
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/9290#discussion_r43413963
--- Diff: R/pkg/R/sparkR.R ---
@@ -93,7 +93,7 @@ sparkR.stop <- function() {
#' sc <- sparkR.init("local[2]", &
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9363#issuecomment-152339112
LGTM. Thanks @felixcheung
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9366#issuecomment-152387852
cc @mengxr
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9368#issuecomment-152414716
LGTM. Thanks @sun-rui
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9196#issuecomment-152601228
LGTM. Merging this. Thanks @sun-rui
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9099#issuecomment-152602683
@zero323 @sun-rui Can we remove the unit tests for the list / environment
stuff from this PR ? From what I understand these problems already exist
irrespective of this
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9290#issuecomment-152648153
LGTM. Thanks @felixcheung -- Merging this.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9390#issuecomment-152774685
cc @brkyvz @andrewor14
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9218#issuecomment-152774865
My take is that it's even fine if we only support basic types through
`coltypes<-` for now and throw a `complex types not supported` error message
for now. That way
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-152774944
@rxin can you comment on how important sessions are (esp. with respect to
SparkR) ? If they are not important we can significantly simplify things and
just support one
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9394#issuecomment-152859852
On this note can we update the roxygen doc for DataFrames ?
http://spark.apache.org/docs/latest/api/R/DataFrame.html seems pretty
incomplete.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9401#issuecomment-153002049
LGTM. Thanks for adding this @felixcheung !
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9394#issuecomment-153125114
Yeah but the DataFrame.html doesn't seem very useful the way it is right
now -- can you just update the roxygen docs in this PR as well ?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/9390#issuecomment-153135758
@sun-rui Thanks for the PR. It looks like this changes a lot of parts of
the code though, so we'll need to review it carefully.
Are there any alte