Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
OK great. I think we should avoid breaking developer APIs unless doing so has a
huge upside. It wouldn't be fun to break them just for some cosmetic things ...
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18749
What is the compatibility concern?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18780
If you are asking for their opinions, it'd be easier if you asked more
explicitly (A vs B) in one comment, rather than asking them to go through and
read the entire thread ...
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18752
cc @JoshRosen
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18702
LGTM too.
Merging in master.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18697
cc @cloud-fan @hvanhovell
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18645
When users upgrade from 2.11 to 2.12, their app would be broken, wouldn't
it?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18645
@srowen I don't agree that we should just break source compatibility here.
We have already spent a lot of time doing this in the past and figuring out how
to preserve it.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18715
Wait let's ask why @tdas did it this way...
On Sun, Jul 23, 2017 at 10:45 AM asfgit <notificati...@github.com> wrote:
> Closed #18715 <https://github.com/apache/spark/pul
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18645
It is still a source-breaking change, and this is why I was saying it would
be a lot of work to upgrade to Scala 2.12 without breaking existing source
code. For 2.12 we should get rid of the functions
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18715
cc @tdas Was there a reason to use ``?
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18715
[minor] Remove in test case names in FlatMapGroupsWithStateSuite
## What changes were proposed in this pull request?
This patch removes the `` string from test names
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18709
"Create Version" isn't a good user-facing description. It'd make more sense
to just say "Created by Spark xxx"
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18714#discussion_r128908118
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -881,6 +881,16 @@ object SQLConf {
.intConf
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18645
@srowen You just showed that the Scala 2.12 changes are source-breaking,
didn't you?
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18645#discussion_r128890891
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
---
@@ -353,7 +353,7 @@ class DatasetSuite extends QueryTest with
SharedSQLContext
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18645#discussion_r128890868
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskContextSuite.scala ---
@@ -54,7 +54,10 @@ class TaskContextSuite extends SparkFunSuite
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18468
Uncompress a small block at a time.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18468
Hey, sorry for commenting late, but I don't think this change really makes
sense. If anything, I'd decompress data in batch into an uncompressed column
batch, rather than building an adapter
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18680
Have you guys checked the performance of this change? It changes the number
of concrete implementations for column vector from 2 to 3 (and potentially 1 to
2 at runtime). This might (or might
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18487
hm is this a bug fix? if not we shouldn't cherry-pick it.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18306
cc @zsxwing
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17848#discussion_r128162324
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -103,4 +110,19 @@ case class UserDefinedFunction
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17848#discussion_r128159939
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -103,4 +110,19 @@ case class UserDefinedFunction
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17848#discussion_r128159874
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -103,4 +110,19 @@ case class UserDefinedFunction
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17848#discussion_r128159780
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/UserDefinedFunction.scala
---
@@ -58,6 +55,13 @@ case class UserDefinedFunction protected
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17150
Are you working on 2.12?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17150
Do the removal (i.e. this PR).
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17150
Maybe do it a bit later, when the backport rate drops? E.g. it's unlikely
we'll still be doing a lot of backports once 2.3 is cut.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18606
It's already merged.
https://github.com/apache/spark/commit/24367f23f77349a864da340573e39ab2168c5403
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18606
That's true. Merging in master.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17633
@mallman we don't backport such risky changes to maintenance branches.
Those branches typically go through much less testing.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18586
Merging in master. Thanks.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18559
It'd be important to document which syntaxes are no longer allowed in the
JIRA ticket (and PR description), and we should also highlight that in the
release notes.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18559#discussion_r126072754
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2638,4 +2638,17 @@ class SQLQuerySuite extends QueryTest
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18540#discussion_r126016128
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/expressions/WindowSpec.scala ---
@@ -174,28 +191,22 @@ class WindowSpec private[sql
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18540#discussion_r126016260
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -805,4 +806,24 @@ object TypeCoercion
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18159#discussion_r126015755
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -314,21 +339,40 @@ object FileFormatWriter extends
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18549
Merging in master.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17633#discussion_r126013379
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -589,18 +590,40 @@ private[client] class Shim_v0_13 extends
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18307
high level change looks good to me.
@aray can you update the title / description of the PR and JIRA ticket?
cc @cloud-fan can you review this to make sure the implementation
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18494
cc @hvanhovell
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18307#discussion_r125146093
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2205,37 +2205,151 @@ class Dataset[T] private[sql](
* // max 92.0
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18307#discussion_r125146112
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2205,37 +2205,151 @@ class Dataset[T] private[sql](
* // max 92.0
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18307#discussion_r125146063
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2205,37 +2205,151 @@ class Dataset[T] private[sql](
* // max 92.0
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18479
Funny, the tests actually passed. Maybe you guys can just review this.
cc @gengliangwang @gatorsmile @wzhfy
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18307
OK then let's use summary.
@aray want to do that update?
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18307#discussion_r125095026
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2205,37 +2205,170 @@ class Dataset[T] private[sql](
* // max 92.0
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18334
Can the stats be updated incrementally?
---
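As an aside, the kind of incremental statistics update being asked about can be sketched as follows. This is a generic illustration, not Spark's actual statistics code, and the `ColumnStats` fields are hypothetical: each new batch is folded into running aggregates instead of rescanning the whole table.

```python
from dataclasses import dataclass


@dataclass
class ColumnStats:
    count: int = 0
    minimum: float = float("inf")
    maximum: float = float("-inf")
    total: float = 0.0

    def update(self, batch):
        # Fold a new batch into the running stats; no full-table rescan needed.
        self.count += len(batch)
        self.minimum = min(self.minimum, min(batch))
        self.maximum = max(self.maximum, max(batch))
        self.total += sum(batch)


stats = ColumnStats()
stats.update([3, 1, 4])   # first batch of rows
stats.update([1, 5, 9])   # later batch merged incrementally
print(stats.count, stats.minimum, stats.maximum, stats.total)
```

Count, min, max, and sum merge trivially this way; distinct counts would need a mergeable sketch (e.g. HyperLogLog) instead.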
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18424
Have you done actual benchmarks to validate that this is a perf improvement?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18469
Can we minimize the change by just adding this method to PlanTest? It's not
that many lines of code.
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18479
WIP - stat propagation code using mixin
## What changes were proposed in this pull request?
TBD
## How was this patch tested?
Should be covered by existing test cases.
You can merge
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17935
The reason I found out about this is that one of the widely circulated
TPC-DS benchmark harnesses online uses this.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17935
I don't think that argument is useful at all. For example, none of the
other databases support the DataFrame API. Does that mean few users will write
DataFrame code?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17935
Other committers please revert this change until we find a solution or
verify that almost no users write queries like this.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18307#discussion_r124932359
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2205,37 +2205,170 @@ class Dataset[T] private[sql](
* // max 92.0
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17935
Also the description / title is completely different from the JIRA ticket.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17935
Guys - isn't this API breaking?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18301
hey i didn't track super closely, but it is pretty important to show at
least one more digit, e.g. 1.7, rather than just 2.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15821
In the future we should revert PRs that fail builds IMMEDIATELY. There is
no way we should've let the build be broken for days.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18429#discussion_r124457557
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -152,6 +153,19 @@ abstract class Optimizer
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18429#discussion_r124455032
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -152,6 +153,19 @@ abstract class Optimizer
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18429#discussion_r124455104
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -152,6 +153,19 @@ abstract class Optimizer
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18429#discussion_r124452275
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -152,6 +153,19 @@ abstract class Optimizer
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18429#discussion_r124177929
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/EliminateDistinceSuite.scala
---
@@ -0,0 +1,56 @@
+/*
+ * Licensed
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18368
Jenkins, retest this please.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18395
Is this going to be exposed?
Either way, we should use something like a spark.util.kvstore package rather
than a top-level package.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18042
Please let's not waste more time here. I don't think the gain is worth the
effort required (or even the discussions here).
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18377
Hm I'm not even sure if we should backport this in branch-2.2. Let's wait
and see ...
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18387
What about CheckAnalysis?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18387
hm should we do this? It'd make more sense to throw an analyzer error,
rather than some deep call stack that's coming from an operator.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18377
Why did we backport this? This seems too risky.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18310
Merging in master.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18343
I was talking about the classname for the internal members.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18343
It's obvious it will reduce data size with custom serialization, since the
custom logic doesn't need to write the full classname out, which the Java
default one does.
I don't think Kryo knows
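The class-name overhead described above shows up in any generic serializer, not just Java's. Here is an analogous sketch in Python, where pickle plays the role of Java default serialization; the choice of `Decimal` and the custom encoding are illustrative, not anything from Spark:

```python
import pickle
from decimal import Decimal

value = Decimal("3.14")

# Generic serialization: the stream must carry the full class path
# ("decimal" / "Decimal") so any reader can reconstruct the object.
generic = pickle.dumps(value)

# Custom serialization: both sides already agree on the type, so only the
# payload needs to be written -- here, the string form of the number.
custom = str(value).encode()

print(b"Decimal" in generic)   # the class name is embedded in the stream
print(len(generic), len(custom))
```

Kryo's class registration works on the same principle: a registered class is written as a small integer ID instead of its full name.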
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18307
My worry is that now the default performance will be slow. Maybe this flag
can be off by default?
---
GitHub user rxin reopened a pull request:
https://github.com/apache/spark/pull/18310
[SPARK-21103][SQL] QueryPlanConstraints should be part of LogicalPlan
## What changes were proposed in this pull request?
QueryPlanConstraints should be part of LogicalPlan, rather than
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18301
also the avg probe probably shouldn't be an integer. at least we should
show something like 1.9?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18301
yes but i just feel it is getting very long and verbose ..
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18301
I'd shorten it to "avg hash probe". Also do we really need min, med, max?
Maybe just a single global avg?
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18301#discussion_r122128307
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala
---
@@ -573,8 +586,11 @@ private[execution] final class
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18299
Merging in master.
---
Github user rxin closed the pull request at:
https://github.com/apache/spark/pull/18310
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18310
Closing for now, since @sameeragarwal said it might be useful in physical
planning in the future.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18310
This currently includes all the changes from
https://github.com/apache/spark/pull/18299
But only the last commit matters.
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18310
[SPARK-21103][SQL] QueryPlanConstraints should be part of LogicalPlan
## What changes were proposed in this pull request?
QueryPlanConstraints should be part of LogicalPlan, rather than QueryPlan
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18301
Can you put a screenshot of the UI up, for both join and aggregate?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18307
What's the perf impact here? My worry is that we will significantly slow
down describe ...
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18299#discussion_r122072883
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlanConstraints.scala
---
@@ -27,18 +27,20 @@ trait QueryPlanConstraints
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18299
The issue is that SparkSession might change the way they are wired and it's
not always the case that when we create a new thread, we set the thread local
conf.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18306
Is this safe to do @marmbrus ?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18298
Merging in master.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18298#discussion_r122008512
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlanConstraints.scala
---
@@ -0,0 +1,206 @@
+/*
+ * Licensed
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18299
cc @wzhfy
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18299
Spark 21092
## What changes were proposed in this pull request?
It is really painful to not have configs in logical plan and expressions.
We had to add all sorts of hacks (e.g. pass SQLConf
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18299
Note that this patch is based on
https://github.com/apache/spark/pull/18298. Once we merge that one the diff
will become smaller.
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/18298
[SPARK-21091][SQL] Move constraint code into QueryPlanConstraints
## What changes were proposed in this pull request?
This patch moves constraint related code into a separate trait
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18298#discussion_r121865658
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlanConstraints.scala
---
@@ -0,0 +1,206 @@
+/*
+ * Licensed
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r121729635
--- Diff: pom.xml ---
@@ -1871,6 +1872,25 @@
paranamer
${paranamer.version}
+
+org.apache.arrow
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18260
Why are we doing this? Isn't it better potentially for compression to store
them separately? We can also easily remove the offset for fixed length arrays.
---
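The point about offsets can be sketched as follows. This is an illustrative layout, not Spark's actual columnar format: variable-length elements need an explicit offsets array to find element boundaries, while for fixed-length elements the offset is implied by `i * width` and can be dropped entirely.

```python
import struct

# Variable-length elements need an explicit offsets array ...
strings = [b"spark", b"sql", b"columnar"]
data = b"".join(strings)
offsets = []
pos = 0
for s in strings:
    offsets.append(pos)
    pos += len(s)


def get_var(i):
    # Element i spans from its offset to the next element's offset.
    end = offsets[i + 1] if i + 1 < len(offsets) else len(data)
    return data[offsets[i]:end]


# ... but for fixed-length elements the offset is implied: i * width.
ints = struct.pack("<4i", 10, 20, 30, 40)
WIDTH = 4  # bytes per 32-bit int


def get_fixed(i):
    return struct.unpack_from("<i", ints, i * WIDTH)[0]


print(get_var(1), get_fixed(2))
```

Storing values and offsets as separate contiguous buffers (rather than interleaved) also tends to compress better, since each buffer is more homogeneous.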