wangfei created SPARK-1512:
--
Summary: improve spark sql to support tables with more than 22
fields
Key: SPARK-1512
URL: https://issues.apache.org/jira/browse/SPARK-1512
Project: Spark
Issue Type:
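The issue above stems from Scala 2.10's 22-field limit on case classes, which blocks the reflection-based schema path for wide tables. A minimal sketch of the usual workaround idea, building rows and schemas programmatically (this is illustrative Python, not Spark's API):

```python
# Minimal sketch (not Spark's API): represent a wide row as a list of
# values plus a field-name -> index map, sidestepping the 22-field
# case-class limit that Scala 2.10 imposes on the reflection-based path.
class WideRow:
    def __init__(self, schema, values):
        self.index = {name: i for i, name in enumerate(schema)}
        self.values = list(values)

    def get(self, field):
        return self.values[self.index[field]]

fields = [f"f{i}" for i in range(1, 31)]               # 30 fields, > 22
row = WideRow(fields, [i * 10 for i in range(1, 31)])
print(row.get("f23"))                                  # 230
```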
[
https://issues.apache.org/jira/browse/SPARK-1361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13988900#comment-13988900
]
wangfei commented on SPARK-1361:
Hi Kousuke Saruta, this issue has already been resolved.
[
https://issues.apache.org/jira/browse/SPARK-1361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei resolved SPARK-1361.
Resolution: Fixed
DAGScheduler throws NullPointerException occasionally
[
https://issues.apache.org/jira/browse/SPARK-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994096#comment-13994096
]
wangfei commented on SPARK-1779:
I have fixed it by throwing an exception.
wangfei created SPARK-2460:
--
Summary: Optimize SparkContext.hadoopFile api
Key: SPARK-2460
URL: https://issues.apache.org/jira/browse/SPARK-2460
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-2243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14061620#comment-14061620
]
wangfei commented on SPARK-2243:
Spark Job Server may create multiple SparkContexts in one JVM.
[
https://issues.apache.org/jira/browse/SPARK-2243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14061767#comment-14061767
]
wangfei commented on SPARK-2243:
What do you mean by "but it's something we could support"?
wangfei created SPARK-2608:
--
Summary: scheduler backend creates executor launch command
incorrectly
Key: SPARK-2608
URL: https://issues.apache.org/jira/browse/SPARK-2608
Project: Spark
Issue
wangfei created SPARK-2752:
--
Summary: spark sql cli should not exit when it gets an exception
Key: SPARK-2752
URL: https://issues.apache.org/jira/browse/SPARK-2752
Project: Spark
Issue Type: Bug
wangfei created SPARK-2925:
--
Summary: bin/spark-sql shell throws an unrecognized option error when
--driver-java-options is set
Key: SPARK-2925
URL: https://issues.apache.org/jira/browse/SPARK-2925
Project: Spark
wangfei created SPARK-3010:
--
Summary: fix redundant conditional
Key: SPARK-3010
URL: https://issues.apache.org/jira/browse/SPARK-3010
Project: Spark
Issue Type: Improvement
Components:
wangfei created SPARK-3038:
--
Summary: delete history server logs when there are too many logs
Key: SPARK-3038
URL: https://issues.apache.org/jira/browse/SPARK-3038
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3038:
---
Description:
enhance history server to delete logs automatically
1. use spark.history.deletelogs.enable to
wangfei created SPARK-3088:
--
Summary: fix test noisy message
Key: SPARK-3088
URL: https://issues.apache.org/jira/browse/SPARK-3088
Project: Spark
Issue Type: Improvement
Components: Build
[
https://issues.apache.org/jira/browse/SPARK-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3088:
---
Summary: noisy messages when run tests (was: fix test noisy message)
noisy messages when run tests
wangfei created SPARK-3125:
--
Summary: hive thriftserver test suite failure
Key: SPARK-3125
URL: https://issues.apache.org/jira/browse/SPARK-3125
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102424#comment-14102424
]
wangfei commented on SPARK-3125:
For CliSuite I printed the error info, as follows:
wangfei created SPARK-3193:
--
Summary: output error info when Process exit code is not zero
Key: SPARK-3193
URL: https://issues.apache.org/jira/browse/SPARK-3193
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei reopened SPARK-3193:
output error info when Process exit code is not zero
wangfei created SPARK-3293:
--
Summary: yarn's web UI shows SUCCEEDED when the driver throws an
exception in yarn-client mode
Key: SPARK-3293
URL: https://issues.apache.org/jira/browse/SPARK-3293
Project: Spark
wangfei created SPARK-3296:
--
Summary: spark-example should be run-example in the header comments of
DenseKMeans and SparseNaiveBayes
Key: SPARK-3296
URL: https://issues.apache.org/jira/browse/SPARK-3296
Project:
wangfei created SPARK-3303:
--
Summary: SparkContextSchedulerCreationSuite test failed when mesos
native lib is set
Key: SPARK-3303
URL: https://issues.apache.org/jira/browse/SPARK-3303
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3303:
---
Description:
run the test on the master branch with this command
sbt/sbt -Pyarn -Phadoop-2.3
[
https://issues.apache.org/jira/browse/SPARK-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3303:
---
Description:
run the test on the master branch with this command
sbt/sbt -Phive test-only
[
https://issues.apache.org/jira/browse/SPARK-3010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei reopened SPARK-3010:
fix redundant conditional
-
Key: SPARK-3010
URL:
[
https://issues.apache.org/jira/browse/SPARK-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116191#comment-14116191
]
wangfei commented on SPARK-3088:
we should output stderr.
noisy messages when run tests
[
https://issues.apache.org/jira/browse/SPARK-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei closed SPARK-3088.
--
Resolution: Fixed
noisy messages when run tests
-
Key:
wangfei created SPARK-3322:
--
Summary: Log a ConnectionManager error when the application ends
Key: SPARK-3322
URL: https://issues.apache.org/jira/browse/SPARK-3322
Project: Spark
Issue Type: Bug
wangfei created SPARK-3323:
--
Summary: yarn website's Tracking UI links to the Standby RM
Key: SPARK-3323
URL: https://issues.apache.org/jira/browse/SPARK-3323
Project: Spark
Issue Type: Bug
wangfei created SPARK-3652:
--
Summary: upgrade spark sql hive version to 0.13.1
Key: SPARK-3652
URL: https://issues.apache.org/jira/browse/SPARK-3652
Project: Spark
Issue Type: Dependency upgrade
[
https://issues.apache.org/jira/browse/SPARK-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3652:
---
Description: now the spark sql hive version is 0.12.0; compiling with 0.13.1 will
get errors.
upgrade spark sql
wangfei created SPARK-3676:
--
Summary: jdk version leads to spark hql test suite error
Key: SPARK-3676
URL: https://issues.apache.org/jira/browse/SPARK-3676
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-3676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3676:
---
Summary: jdk version leads to spark sql test suite error (was: jdk version
leads to spark hql test suite
[
https://issues.apache.org/jira/browse/SPARK-3676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146050#comment-14146050
]
wangfei commented on SPARK-3676:
Hmm, I see, thanks for that.
jdk version lead to spark
wangfei created SPARK-3704:
--
Summary: the types do not match when adding values from a spark row to a hive
row in SparkSQLOperationManager
Key: SPARK-3704
URL: https://issues.apache.org/jira/browse/SPARK-3704
Project:
wangfei created SPARK-3705:
--
Summary: add case for VoidObjectInspector in inspectorToDataType
Key: SPARK-3705
URL: https://issues.apache.org/jira/browse/SPARK-3705
Project: Spark
Issue Type: Bug
wangfei created SPARK-3720:
--
Summary: support ORC in spark sql
Key: SPARK-3720
URL: https://issues.apache.org/jira/browse/SPARK-3720
Project: Spark
Issue Type: New Feature
Components: SQL
[
https://issues.apache.org/jira/browse/SPARK-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3720:
---
Description:
The Optimized Row Columnar (ORC) file format provides a highly efficient way to
store data on
wangfei created SPARK-3755:
--
Summary: Do not bind ports 1 - 1024 to servers in spark
Key: SPARK-3755
URL: https://issues.apache.org/jira/browse/SPARK-3755
Project: Spark
Issue Type: Bug
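A sketch of the check behind this issue: ports 1-1024 are privileged on most systems, so a non-root service should refuse (or never pick) them. The exact boundary Spark enforces may differ; this is illustrative only:

```python
# Sketch of the idea in SPARK-3755: ports 1-1024 are privileged, so a
# service should reject them up front. Not Spark's actual code.
def is_privileged(port):
    return 1 <= port <= 1024

def validate_port(port):
    if is_privileged(port):
        raise ValueError(f"port {port} is in the privileged range 1-1024")
    return port

print(is_privileged(80))    # True
print(is_privileged(4040))  # False
```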
wangfei created SPARK-3756:
--
Summary: check exception is caused by an address-port collision
when binding properly
Key: SPARK-3756
URL: https://issues.apache.org/jira/browse/SPARK-3756
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3756:
---
Affects Version/s: 1.1.0
check exception is caused by an address-port collision when binding properly
[
https://issues.apache.org/jira/browse/SPARK-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3756:
---
Description: a tiny bug in method isBindCollision
Target Version/s: 1.2.0
check exception is
[
https://issues.apache.org/jira/browse/SPARK-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3756:
---
Summary: check exception is caused by an address-port collision properly
(was: check exception is caused by
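The "tiny bug in method isBindCollision" above concerns detecting whether a bind failure was really an address-port collision. A sketch of the idea: walk the exception's cause chain and look for a bind error. Spark's Scala version checks for java.net.BindException (and also inspects messages); the Python analogue below uses OSError with EADDRINUSE and is a simplification:

```python
import errno

# Sketch of the idea behind isBindCollision (SPARK-3756): walk the cause
# chain and report whether the failure was an address-port collision.
def is_bind_collision(e):
    while e is not None:
        if isinstance(e, OSError) and e.errno == errno.EADDRINUSE:
            return True
        e = e.__cause__
    return False

busy = OSError(errno.EADDRINUSE, "Address already in use")
wrapped = RuntimeError("startup failed")
wrapped.__cause__ = busy
print(is_bind_collision(busy))                   # True
print(is_bind_collision(wrapped))                # True
print(is_bind_collision(RuntimeError("other")))  # False
```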
wangfei created SPARK-3765:
--
Summary: add testing with sbt to doc
Key: SPARK-3765
URL: https://issues.apache.org/jira/browse/SPARK-3765
Project: Spark
Issue Type: Improvement
Affects Versions:
wangfei created SPARK-3766:
--
Summary: Snappy is also the default compression codec for
broadcast variables
Key: SPARK-3766
URL: https://issues.apache.org/jira/browse/SPARK-3766
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-3766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3766:
---
Component/s: Documentation
Snappy is also the default compression codec for broadcast variables
[
https://issues.apache.org/jira/browse/SPARK-3765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3765:
---
Component/s: Documentation
add testing with sbt to doc
---
Key:
wangfei created SPARK-3792:
--
Summary: enable JavaHiveQLSuite
Key: SPARK-3792
URL: https://issues.apache.org/jira/browse/SPARK-3792
Project: Spark
Issue Type: Improvement
Components: SQL
wangfei created SPARK-3793:
--
Summary: add para hiveconf when parse hive ql
Key: SPARK-3793
URL: https://issues.apache.org/jira/browse/SPARK-3793
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3793:
---
Summary: use hiveconf when parse hive ql (was: add para hiveconf when
parse hive ql)
use hiveconf when
[
https://issues.apache.org/jira/browse/SPARK-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3793:
---
Summary: add para hiveconf when parse hive ql (was: use hiveconf when
parse hive ql)
add para hiveconf
[
https://issues.apache.org/jira/browse/SPARK-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3793:
---
Summary: use hiveconf when parse hive ql (was: add para hiveconf when
parse hive ql)
use hiveconf when
wangfei created SPARK-3806:
--
Summary: minor bug and exception in CliSuite
Key: SPARK-3806
URL: https://issues.apache.org/jira/browse/SPARK-3806
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3806:
---
Summary: minor bug in CliSuite (was: minor bug and exception in CliSuite)
minor bug in CliSuite
wangfei created SPARK-3809:
--
Summary: make HiveThriftServer2Suite work correctly
Key: SPARK-3809
URL: https://issues.apache.org/jira/browse/SPARK-3809
Project: Spark
Issue Type: Bug
wangfei created SPARK-3826:
--
Summary: enable hive-thriftserver to support hive-0.13.1
Key: SPARK-3826
URL: https://issues.apache.org/jira/browse/SPARK-3826
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei closed SPARK-3793.
--
Resolution: Fixed
should fix it in #2241
use hiveconf when parse hive ql
---
[
https://issues.apache.org/jira/browse/SPARK-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3826:
---
Affects Version/s: (was: 1.1.1)
1.1.0
enable hive-thriftserver support
wangfei created SPARK-3899:
--
Summary: wrong links in streaming doc
Key: SPARK-3899
URL: https://issues.apache.org/jira/browse/SPARK-3899
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-3935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3935:
---
Description:
There is an unused variable (count) in the saveAsHadoopDataset function in
PairRDDFunctions.scala.
[
https://issues.apache.org/jira/browse/SPARK-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178183#comment-14178183
]
wangfei commented on SPARK-4001:
Thanks Sean Owen for explaining! Frequent itemset
[
https://issues.apache.org/jira/browse/SPARK-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178183#comment-14178183
]
wangfei edited comment on SPARK-4001 at 10/21/14 9:38 AM:
--
.
[
https://issues.apache.org/jira/browse/SPARK-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4001:
---
Comment: was deleted
(was: .)
Add Apriori algorithm to Spark MLlib
wangfei created SPARK-4041:
--
Summary: convert attribute names in table scan to lowercase when
comparing with relation attributes
Key: SPARK-4041
URL: https://issues.apache.org/jira/browse/SPARK-4041
Project:
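The issue above is about case-insensitive attribute resolution: Hive stores column names lower-cased, so table-scan attributes should be normalized before comparison. A minimal illustrative sketch (not Spark's analyzer code):

```python
# Sketch of the fix idea in SPARK-4041: compare attribute names
# case-insensitively by lower-casing both sides.
def resolve_attribute(scan_attr, relation_attrs):
    for attr in relation_attrs:
        if attr.lower() == scan_attr.lower():
            return attr
    return None

relation = ["id", "username", "createdat"]
print(resolve_attribute("UserName", relation))  # username
```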
wangfei created SPARK-4042:
--
Summary: append column ids and names before broadcast
Key: SPARK-4042
URL: https://issues.apache.org/jira/browse/SPARK-4042
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4042:
---
Description: appended column ids and names will not be broadcast because we
append them after creating the table
[
https://issues.apache.org/jira/browse/SPARK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4042:
---
Description:
appended column ids and names will not be broadcast because we append them after
creating the table
[
https://issues.apache.org/jira/browse/SPARK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4042:
---
Description:
appended column ids and names will not be broadcast because we append them after
creating the table
[
https://issues.apache.org/jira/browse/SPARK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4042:
---
Description:
appended column ids and names will not be broadcast because we append them after
creating the table
[
https://issues.apache.org/jira/browse/SPARK-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei resolved SPARK-3652.
Resolution: Fixed
upgrade spark sql hive version to 0.13.1
[
https://issues.apache.org/jira/browse/SPARK-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192701#comment-14192701
]
wangfei commented on SPARK-3322:
yes, to close this.
ConnectionManager logs an error
[
https://issues.apache.org/jira/browse/SPARK-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei closed SPARK-2460.
--
Resolution: Fixed
Optimize SparkContext.hadoopFile api
-
wangfei created SPARK-4177:
--
Summary: update build doc for JDBC/CLI already supporting hive 13
Key: SPARK-4177
URL: https://issues.apache.org/jira/browse/SPARK-4177
Project: Spark
Issue Type: Bug
wangfei created SPARK-4191:
--
Summary: move wrapperFor to HiveInspectors to reuse them
Key: SPARK-4191
URL: https://issues.apache.org/jira/browse/SPARK-4191
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4191:
---
Issue Type: Improvement (was: Bug)
move wrapperFor to HiveInspectors to reuse them
wangfei created SPARK-4225:
--
Summary: jdbc/odbc error when using maven to build spark
Key: SPARK-4225
URL: https://issues.apache.org/jira/browse/SPARK-4225
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196693#comment-14196693
]
wangfei commented on SPARK-4225:
it seems there is some difference between using sbt and
[
https://issues.apache.org/jira/browse/SPARK-4225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196693#comment-14196693
]
wangfei edited comment on SPARK-4225 at 11/4/14 7:46 PM:
-
it seems
wangfei created SPARK-4237:
--
Summary: add Manifest File for Maven building
Key: SPARK-4237
URL: https://issues.apache.org/jira/browse/SPARK-4237
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14197715#comment-14197715
]
wangfei commented on SPARK-4237:
The title is not correct, should be generate right
[
https://issues.apache.org/jira/browse/SPARK-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4237:
---
Summary: Generate right Manifest File for maven building (was: add
Manifest File for Maven building)
[
https://issues.apache.org/jira/browse/SPARK-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4261:
---
Description:
Running with spark sql jdbc/odbc, the output will be
JackydeMacBook-Pro:spark1 jackylee$
wangfei created SPARK-4261:
--
Summary: make right version info for beeline
Key: SPARK-4261
URL: https://issues.apache.org/jira/browse/SPARK-4261
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4237:
---
Description:
Now building spark with maven produces the Manifest File of guava;
we should generate the right Manifest
wangfei created SPARK-4292:
--
Summary: incorrect result set in JDBC/ODBC
Key: SPARK-4292
URL: https://issues.apache.org/jira/browse/SPARK-4292
Project: Spark
Issue Type: Bug
Components:
wangfei created SPARK-4443:
--
Summary: Statistics bug for external table in spark sql hive
Key: SPARK-4443
URL: https://issues.apache.org/jira/browse/SPARK-4443
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4443:
---
Description: When the table is external, the `totalSize` is always zero, which
will influence join
[
https://issues.apache.org/jira/browse/SPARK-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4443:
---
Description: When the table is external, `totalSize` is always zero,
which will influence join
wangfei created SPARK-4449:
--
Summary: specify port range in spark
Key: SPARK-4449
URL: https://issues.apache.org/jira/browse/SPARK-4449
Project: Spark
Issue Type: Bug
Components: Spark
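The issue above asks for a configurable port range rather than a single fixed port. A sketch of the retry idea, with the actual socket bind replaced by an injected predicate so the logic is testable (illustrative only, not Spark's port-retry code):

```python
# Sketch for SPARK-4449: try each port in a configured range until one
# binds. `try_bind` stands in for a real socket bind.
def bind_in_range(min_port, max_port, try_bind):
    for port in range(min_port, max_port + 1):
        if try_bind(port):
            return port
    return None

taken = {9000, 9001}  # pretend these ports are already in use
print(bind_in_range(9000, 9010, lambda p: p not in taken))  # 9002
```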
wangfei created SPARK-4552:
--
Summary: query for empty parquet table in spark sql hive get
IllegalArgumentException
Key: SPARK-4552
URL: https://issues.apache.org/jira/browse/SPARK-4552
Project: Spark
wangfei created SPARK-4553:
--
Summary: query for parquet table with string fields in spark sql
hive get binary result
Key: SPARK-4553
URL: https://issues.apache.org/jira/browse/SPARK-4553
Project: Spark
wangfei created SPARK-4554:
--
Summary: Set fair scheduler pool for JDBC client session in hive 13
Key: SPARK-4554
URL: https://issues.apache.org/jira/browse/SPARK-4554
Project: Spark
Issue Type: Bug
wangfei created SPARK-4559:
--
Summary: Adding support for ucase and lcase
Key: SPARK-4559
URL: https://issues.apache.org/jira/browse/SPARK-4559
Project: Spark
Issue Type: Bug
Components:
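UCASE and LCASE are HiveQL aliases for UPPER and LOWER, so supporting them can amount to routing the alias to the same implementation. The registry below is illustrative, not Spark's function registry:

```python
# Sketch for SPARK-4559: map the HiveQL aliases onto one implementation.
string_fns = {
    "upper": str.upper,
    "ucase": str.upper,
    "lower": str.lower,
    "lcase": str.lower,
}

print(string_fns["ucase"]("spark"))  # SPARK
print(string_fns["lcase"]("SQL"))    # sql
```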
wangfei created SPARK-4574:
--
Summary: Adding support for defining schema in foreign DDL
commands.
Key: SPARK-4574
URL: https://issues.apache.org/jira/browse/SPARK-4574
Project: Spark
Issue Type:
wangfei created SPARK-4618:
--
Summary: Make foreign DDL commands options case-insensitive
Key: SPARK-4618
URL: https://issues.apache.org/jira/browse/SPARK-4618
Project: Spark
Issue Type: Improvement
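The improvement above makes option keys in foreign DDL commands (CREATE TEMPORARY TABLE ... USING ... OPTIONS) case-insensitive. One simple approach is normalizing keys to lowercase on the way in; this sketch is illustrative, not Spark's actual CaseInsensitiveMap:

```python
# Sketch for SPARK-4618: normalize DDL option keys to lowercase so
# lookups are case-insensitive.
def case_insensitive(options):
    return {k.lower(): v for k, v in options.items()}

opts = case_insensitive({"Path": "/data/users.parquet", "HEADER": "true"})
print(opts["path"])    # /data/users.parquet
print(opts["header"])  # true
```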
wangfei created SPARK-4673:
--
Summary: Optimizing limit using coalesce
Key: SPARK-4673
URL: https://issues.apache.org/jira/browse/SPARK-4673
Project: Spark
Issue Type: Bug
Components: SQL
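The optimization above rests on the observation that a global LIMIT n needs only n rows, so the plan can coalesce to a single partition before taking them instead of shuffling all data. A toy model of the idea, with partitions as a list of lists (illustrative, not Spark's physical plan):

```python
# Sketch of the idea in SPARK-4673: merge partitions ("coalesce(1)"),
# then take the first n rows.
def limit_via_coalesce(partitions, n):
    merged = [row for part in partitions for row in part]
    return merged[:n]

parts = [[1, 2, 3], [4, 5], [6, 7, 8]]
print(limit_via_coalesce(parts, 4))  # [1, 2, 3, 4]
```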
wangfei created SPARK-4695:
--
Summary: Get result using executeCollect in spark sql
Key: SPARK-4695
URL: https://issues.apache.org/jira/browse/SPARK-4695
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4695:
---
Issue Type: Improvement (was: Bug)
Get result using executeCollect in spark sql
wangfei created SPARK-4845:
--
Summary: Adding a parallelismRatio to control the number of partitions
of ShuffledRDD
Key: SPARK-4845
URL: https://issues.apache.org/jira/browse/SPARK-4845
Project: Spark
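A sketch of what a parallelismRatio could mean here; the parameter name comes from the issue title and the formula below is an assumption, not the proposal's actual definition:

```python
import math

# Hypothetical sketch for SPARK-4845: scale the parent RDD's partition
# count by a configurable ratio, with a floor of 1.
def shuffle_partitions(parent_partitions, parallelism_ratio):
    return max(1, math.ceil(parent_partitions * parallelism_ratio))

print(shuffle_partitions(200, 0.5))  # 100
print(shuffle_partitions(3, 0.1))    # 1
```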
wangfei created SPARK-4861:
--
Summary: Refactor command in spark sql
Key: SPARK-4861
URL: https://issues.apache.org/jira/browse/SPARK-4861
Project: Spark
Issue Type: Improvement