Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7853#discussion_r39876553
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -96,7 +96,7 @@ class SparkContext(config: SparkConf) extends Logging
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/8749#discussion_r39883770
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -264,6 +264,7 @@ class SparkContext(config: SparkConf) extends Logging
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7853#discussion_r39880833
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -96,7 +96,7 @@ class SparkContext(config: SparkConf) extends Logging
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/8822
Expose SparkContext#stopped flag with @DeveloperApi
See this thread: http://search-hadoop.com/m/q3RTtqvncy17sSTx1
We should expose this flag to developers
@andrewor14
You can merge
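The proposal above is to keep the internal `stopped` flag private while giving developers a way to check it. A minimal sketch of that shape, with illustrative names that are not Spark's actual API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of "expose the stopped flag": the AtomicBoolean stays private,
// callers only get a read-only accessor. Class and method names are
// hypothetical, not SparkContext's real signatures.
class Context {
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    public boolean isStopped() {
        return stopped.get(); // read-only view of the private flag
    }

    public void stop() {
        stopped.compareAndSet(false, true);
    }
}
```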
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/8703
Check partitionId's range in ExternalSorter#spill()
See this thread for background:
http://search-hadoop.com/m/q3RTt0rWvIkHAE81
We should check the range of partition Id and provide
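The check being proposed can be sketched as a simple precondition: reject a partition id outside `[0, numPartitions)` with a descriptive message rather than silently producing corrupt spill output. This is an illustrative helper, not the PR's actual code:

```java
// Hypothetical sketch of the range check proposed for ExternalSorter#spill().
static void checkPartitionId(int partitionId, int numPartitions) {
    if (partitionId < 0 || partitionId >= numPartitions) {
        throw new IllegalArgumentException(
            "partition Id " + partitionId
            + " out of range [0, " + numPartitions + ")");
    }
}
```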
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/8259
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/8218#discussion_r38595554
--- Diff:
network/common/src/main/java/org/apache/spark/network/server/OneForOneStreamManager.java
---
@@ -109,15 +111,34 @@ public void connectionTerminated
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/8259#issuecomment-132006028
@rxin
I don't use GraphX.
But I think including Float in these classes would boost performance where
Float is used.
---

GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/8259
Include Float in @specialized annotation
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark master
Alternatively you can review
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/8259#issuecomment-132063810
Test failures don't seem to be related to the PR.
Not sure whether the following is a test or not:
https://amplab.cs.berkeley.edu/jenkins/job
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7802#discussion_r36590599
--- Diff:
sql/core/src/main/resources/META-INF/services/org.apache.spark.sql.sources.DataSourceRegister
---
@@ -0,0 +1,3
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/7919
[BUILD] Remove dependency reduced POM hack
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark master
Alternatively you can review
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/7919#issuecomment-127478076
The PySpark test failure should not be related to the change.
---
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7882#discussion_r36114103
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -614,8 +614,9 @@ object DateTimeUtils
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7882#discussion_r36103906
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -614,8 +614,9 @@ object DateTimeUtils
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7532#discussion_r36038428
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala
---
@@ -152,6 +152,34 @@ private[spark] class
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7532#discussion_r36038415
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -804,6 +827,87 @@ private[master] class Master
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/7756#issuecomment-126369191
Anything I can do to move this forward ?
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/7756#issuecomment-126534789
Anything I need to do for this issue ?
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/7756#issuecomment-126117214
In CoarseGrainedSchedulerBackend.scala, around line 281:
```
override def stop() {
stopExecutors()
```
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/7756#issuecomment-126087585
Thanks for the quick reviews.
See if rev2 is better.
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/7756
Clear Active SparkContext in stop() method using finally
See 'stopped SparkContext remaining active' thread on mailing list for
relevant details
You can merge this pull request into a Git repository
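The stop()-with-finally change discussed in this PR can be sketched as follows. This is an illustrative pattern, not Spark's actual code: a `compareAndSet` guard makes `stop()` idempotent, and the `finally` block guarantees the active-context reference is cleared even if cleanup throws:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the idempotent-stop-plus-finally pattern; field and class
// names are hypothetical stand-ins for SparkContext internals.
class StoppableService {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    int stopCount = 0;        // how many times cleanup actually ran
    boolean cleared = false;  // stands in for "active context cleared"

    void stop() {
        if (!stopped.compareAndSet(false, true)) {
            return; // second and later calls are no-ops
        }
        try {
            stopCount++; // release resources here; this may throw
        } finally {
            cleared = true; // runs even when cleanup fails
        }
    }
}
```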
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/7756#issuecomment-126122944
Doesn't seem to matter, considering we have the following at the beginning
of stop():
```
if (!stopped.compareAndSet(false, true)) {
  logInfo
```
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/7466
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/7466#issuecomment-122286140
Covered by #7421
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/7466
Make MetastoreRelation#hiveQlPartitions lazy val
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark master
Alternatively you can review
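Scala's `lazy val`, which this PR applies to `hiveQlPartitions`, computes its right-hand side once on first access and caches the result. A rough single-threaded Java analogue of that semantics (Scala's `lazy val` is additionally thread-safe), to sketch why the change avoids recomputing the partitions on every call:

```java
import java.util.function.Supplier;

// Memoizing thunk approximating Scala's `lazy val` (minus thread safety).
class Lazy<T> {
    private final Supplier<T> thunk;
    private T value;
    private boolean computed = false;

    Lazy(Supplier<T> thunk) {
        this.thunk = thunk;
    }

    T get() {
        if (!computed) {
            value = thunk.get(); // evaluated at most once
            computed = true;
        }
        return value;
    }
}
```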
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6444#discussion_r34847593
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillWriter.java
---
@@ -0,0 +1,146 @@
+/*
+ * Licensed
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6444#discussion_r34849488
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillMerger.java
---
@@ -0,0 +1,91 @@
+/*
+ * Licensed
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6444#discussion_r34850864
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillWriter.java
---
@@ -0,0 +1,146 @@
+/*
+ * Licensed
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6444#discussion_r34847659
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java
---
@@ -0,0 +1,282 @@
+/*
+ * Licensed
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6444#discussion_r34849233
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSortDataFormat.java
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6444#discussion_r34848070
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java
---
@@ -0,0 +1,282 @@
+/*
+ * Licensed
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/7405#discussion_r34623959
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/AbstractScalaRowIterator.scala
---
@@ -17,11 +17,14 @@
package org.apache.spark.sql
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6205#discussion_r33873268
--- Diff: core/src/main/scala/org/apache/spark/rpc/RpcEnv.scala ---
@@ -182,3 +184,109 @@ private[spark] object RpcAddress {
RpcAddress(host, port
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6607#discussion_r33632077
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/scheduler/ReceiverTracker.scala
---
@@ -271,6 +273,41 @@ class ReceiverTracker(ssc
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6793#discussion_r32379130
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ArithmeticExpressionSuite.scala
---
@@ -69,6 +69,7 @@ class
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6793#discussion_r32379114
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ArithmeticExpressionSuite.scala
---
@@ -69,6 +69,7 @@ class
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6793#issuecomment-111724468
I am trying to figure out how checkEvaluation should be used for the new
test.
protected def checkEvaluation(
expression: Expression, expected: Any
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6793#issuecomment-111751576
Looking at ArithmeticExpressionSuite.scala, it has some checks in the
following form:
checkDoubleEvaluation(c1 - c2, (-0.9 +- 0.001), row)
This seems
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/6059
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/6793
Fix NullPointerException with functions.rand()
This PR fixes the problem reported by Justin Yip in the thread
'NullPointerException with functions.rand()'
You can merge this pull request into a Git
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6793#issuecomment-111668224
I looked at UnsafeFixedWidthAggregationMapSuite.scala in expressions
package.
Is RandomSuite.scala going to test Rand and Randn only ?
A bit more hint
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6793#issuecomment-111655373
Mind telling me which suite the new test should be added to ?
Thanks
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6793#issuecomment-111657774
At first glance, none of the test suites under
sql/catalyst/src/test/scala/org/apache/spark/sql seems proper for the new test.
---
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6508#discussion_r31374404
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -101,6 +104,11 @@ private[spark] class ExecutorAllocationManager
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6390#discussion_r30953036
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -50,7 +50,13 @@ class KryoSerializer(conf: SparkConf)
with Logging
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/6390
Kryo buffer size configured in mb should be properly supported
@JoshRosen
This PR tried to fix the issue reported by Debasish Das under 'Kryo option
changed' thread
You can merge this pull
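The compatibility issue here is that older configs gave the Kryo buffer size as a bare number of megabytes, while newer ones use a string with a size suffix. A hypothetical helper sketching how both forms could be accepted (returning kibibytes); the method name and suffix handling are illustrative, not Spark's actual code:

```java
// Accept both "64k" / "8m" suffixed sizes and legacy bare-MB numbers.
static long bufferSizeKb(String value) {
    String v = value.trim().toLowerCase();
    if (v.endsWith("k")) {
        return Long.parseLong(v.substring(0, v.length() - 1));
    }
    if (v.endsWith("m")) {
        return Long.parseLong(v.substring(0, v.length() - 1)) * 1024;
    }
    return Long.parseLong(v) * 1024; // bare number: treat as MB, as before
}
```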
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6390#discussion_r30953067
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -50,7 +50,13 @@ class KryoSerializer(conf: SparkConf)
with Logging
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6390#discussion_r30955789
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -62,6 +62,10 @@ class KryoSerializerSuite extends FunSuite
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/6246
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/6246
Fix compilation error due to buildScan() being final
This PR fixes the following compilation error:
[error]
/home/jenkins/workspace/Spark-Master-Maven-with-YARN/HADOOP_PROFILE/hadoop-2.4
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/6194#discussion_r30459211
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/package.scala ---
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6059#issuecomment-101334832
Ran suite two more times with hadoop-2.4 profile - DriverSuite passed.
Running suite with hadoop-2.3 profile now.
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6059#issuecomment-101345262
DriverSuite passed with hadoop-2.3 profile as well.
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/6059
SPARK-7355 FlakyTest - o.a.s.DriverSuite
The test passed locally:
{code}
DriverSuite:
- driver should exit after finishing without cleanup (SPARK-530)
Run
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6059#issuecomment-101007216
From
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/32407/consoleFull
:
{code}
[info] DriverSuite:
[info] - driver should exit after
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6059#issuecomment-101024546
I am running test suite locally to try to reproduce the test failure.
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6059#issuecomment-101073092
Ran test suite 3 times where DriverSuite passed every time.
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6031#issuecomment-100551775
We do have green builds now:
https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-Master-Maven-with-YARN/2115/HADOOP_PROFILE=hadoop-2.4,label=centos/console
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/6028
Upgrade version of jackson-databind in sql/core/pom.xml
Currently version of jackson-databind in sql/core/pom.xml is 2.3.0
This is older than the version specified in root pom.xml
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6028#issuecomment-100501599
This PR should get rid of the following test failure:
https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/6028#issuecomment-100501682
@rxin
Please take a look
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/6031
Reference fasterxml.jackson.version in sql/core/pom.xml
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark master
Alternatively you can
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5897#issuecomment-100050095
bq. I'm really pedantic
I like that :-)
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5897#issuecomment-100015784
SPARK-7450 has been filed
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5897#issuecomment-99621943
[error]
/home/jenkins/workspace/SparkPullRequestBuilder@2/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveWindowFunctionQuerySuite.scala:769:
not found
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/5897#discussion_r29638709
--- Diff:
unsafe/src/main/java/org/apache/spark/unsafe/bitset/BitSetMethods.java ---
@@ -71,7 +72,13 @@ public static boolean isSet(Object baseObject, long
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5897#issuecomment-98897971
From failed test:
bq. [error] oro#oro;2.0.8!oro.jar origin location must be absolute:
file:/home/jenkins/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar
Pretty sure
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/5897#discussion_r29638486
--- Diff:
unsafe/src/main/java/org/apache/spark/unsafe/bitset/BitSetMethods.java ---
@@ -71,7 +72,13 @@ public static boolean isSet(Object baseObject, long
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/5897#discussion_r29639524
--- Diff:
unsafe/src/main/java/org/apache/spark/unsafe/bitset/BitSetMethods.java ---
@@ -71,7 +72,13 @@ public static boolean isSet(Object baseObject, long
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/5897
Use UNSAFE.getLong() to speed up BitSetMethods#anySet()
@JoshRosen
Please take a look
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu
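The speed-up idea behind this PR, sketched over a plain `long[]` rather than the `UNSAFE.getLong()` call the title names: scan the bitset one 64-bit word at a time instead of testing individual bits, so `anySet()` does one load per 64 bits:

```java
// Word-at-a-time emptiness check; an illustrative sketch, not
// BitSetMethods' actual Unsafe-based implementation.
static boolean anySet(long[] words) {
    for (long w : words) {
        if (w != 0L) {
            return true; // at least one bit set somewhere in this word
        }
    }
    return false;
}
```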
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5897#issuecomment-98913797
From
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/31809/testReport/junit/org.apache.spark.deploy/SparkSubmitSuite
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/5874
Set null as default value for TaskContextImpl#taskMemoryManager
@JoshRosen
Please take a look.
You can merge this pull request into a Git repository by running:
$ git pull https
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5874#issuecomment-98562614
From what I can tell (e.g. JavaAPISuite), null taskMemoryManager is passed
in existing tests.
This doesn't seem to increase the chance of NPE.
I agree
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5874#issuecomment-98568325
I found two places where taskMemoryManager is used:
DAGScheduler#runLocallyWithinThread()
Executor#run()
taskMemoryManager is constructed in both places
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/5874
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5874#issuecomment-98571954
Right.
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/5725#issuecomment-97511381
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeFixedWidthAggregationMap.java, line 133
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/5704#discussion_r29382639
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -281,21 +285,30 @@ private[spark] class ExecutorAllocationManager
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/5673
SPARK-7107 Add parameter for zookeeper.znode.parent to hbase_inputformat.py
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/4836
SPARK-6085 Increase default value for memory overhead
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark master
Alternatively you can
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/4836#issuecomment-76551769
Thanks for the reminder, updated accordingly.
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/4794
SPARK-6045 RecordWriter should be checked against null in PairRDDFunctions#saveAsNewAPIHadoopDataset
You can merge this pull request into a Git repository by running:
$ git pull https
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/4021#discussion_r25209770
--- Diff: core/src/main/scala/org/apache/spark/Accumulators.scala ---
@@ -320,7 +334,13 @@ private[spark] object Accumulators {
def add(values: Map[Long
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/4021#issuecomment-75666737
Not yet.
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/3115#issuecomment-63339550
Thanks Sean for chiming in.
bq. Most people wouldn't depend on the server
Mapreduce related classes, e.g. TableMapReduceUtil, are in hbase-server
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/3286#issuecomment-63357810
@pwendell
I logged SPARK-4455
Let me know if I need to create another pull request due to the merge
---
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/3286#issuecomment-63381658
These annotations are used to indicate the audience / stability of HBase
APIs.
They're not needed by HBase clients.
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/3286
Exclude dependency on hbase-annotations module
@pwendell
Please take a look
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/1893#issuecomment-61847146
I will create another pull request since my local workspace has become
stale.
---
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/1893
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/3115
SPARK-1297 Upgrade HBase dependency to 0.98
@pwendell @rxin
Please take a look
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tedyu/spark
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/3115#issuecomment-61866705
hbase 0.98 has been declared stable release.
Since hbase 0.94 is not modularized, compilation against 0.94 or earlier
releases wouldn't be supported.
---
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/1893
SPARK-1297 Upgrade HBase dependency to 0.98
Two profiles are added to examples/pom.xml :
hbase-hadoop1 (default)
hbase-hadoop2
I verified that compilation passes with either profile
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16066911
--- Diff: examples/pom.xml ---
@@ -45,6 +45,39 @@
</dependency>
</dependencies>
</profile>
+<profile>
+  <id>hbase
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16066969
--- Diff: examples/pom.xml ---
@@ -45,6 +45,39 @@
</dependency>
</dependencies>
</profile>
+<profile>
+  <id>hbase
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16067144
--- Diff: examples/pom.xml ---
@@ -110,36 +143,52 @@
<version>${project.version}</version>
</dependency>
<dependency>
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16068521
--- Diff: examples/pom.xml ---
@@ -110,36 +143,52 @@
<version>${project.version}</version>
</dependency>
<dependency>
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/1893#discussion_r16068638
--- Diff: examples/pom.xml ---
@@ -45,6 +45,39 @@
</dependency>
</dependencies>
</profile>
+<profile>
+  <id>hbase
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/194#discussion_r10862801
--- Diff:
external/hbase/src/main/scala/org/apache/spark/nosql/hbase/HBaseUtils.scala ---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software
401 - 500 of 503 matches