Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/1893#issuecomment-51864003
0.98 is a more capable HBase release. +1 for upgrading the HBase dependency to 0.98 :+1:
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
GitHub user haosdent opened a pull request:
https://github.com/apache/spark/pull/194
Add spark-hbase.
As the title says.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/haosdent/spark SPARK-1127
Alternatively you can review and apply
Github user haosdent commented on a diff in the pull request:
https://github.com/apache/spark/pull/194#discussion_r10862614
--- Diff:
external/hbase/src/main/scala/org/apache/spark/nosql/hbase/HBaseUtils.scala ---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software
Github user haosdent commented on a diff in the pull request:
https://github.com/apache/spark/pull/194#discussion_r10862631
--- Diff:
external/hbase/src/main/scala/org/apache/spark/nosql/hbase/SparkHBaseWriter.scala
---
@@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-38357324
@mridulm @tedyu I have updated the pull request per your advice. Could you help me
review it again? Thank you very much.
Github user haosdent commented on a diff in the pull request:
https://github.com/apache/spark/pull/194#discussion_r10863237
--- Diff:
external/hbase/src/test/scala/org/apache/spark/nosql/hbase/HBaseSuite.scala ---
@@ -0,0 +1,48 @@
+package org.apache.spark.nosql.hbase
Github user haosdent commented on a diff in the pull request:
https://github.com/apache/spark/pull/194#discussion_r10863248
--- Diff:
external/hbase/src/main/scala/org/apache/spark/nosql/hbase/SparkHBaseWriter.scala
---
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache
Github user haosdent commented on a diff in the pull request:
https://github.com/apache/spark/pull/194#discussion_r10863265
--- Diff:
external/hbase/src/main/scala/org/apache/spark/nosql/hbase/HBaseUtils.scala ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-38404602
I think a nicer way to do this would be to go through a SchemaRDD (which
is a new feature recently merged into Spark) or even a Scala case class or
Scala tuples
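The case-class route suggested above can be sketched in plain Scala. This is only an illustration of the mapping, not code from the pull request; the `Person` class, the `toHBaseCells` helper, and the `cf` column family are hypothetical, and a real writer would build byte arrays with HBase's `Bytes` utility rather than use strings.

```scala
// Hypothetical sketch: flattening a case class row into HBase-style
// (rowkey, family, qualifier, value) cells, using strings for brevity.
case class Person(id: String, name: String, age: Int)

def toHBaseCells(p: Person): Seq[(String, String, String, String)] = {
  val family = "cf" // hypothetical column family
  Seq(
    (p.id, family, "name", p.name),
    (p.id, family, "age", p.age.toString)
  )
}

// With Spark, one would then write something along the lines of:
//   rdd.flatMap(toHBaseCells).foreachPartition(writeToHBase)
// where writeToHBase opens a table connection and issues Puts.
println(toHBaseCells(Person("row1", "Alice", 30)))
```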
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-38523893
Let me know if you have any questions about how you could use this
functionality.
Thank you very much! :-)
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-39667634
@marmbrus @pwendell Could you help me review this pull request again? I
have already added the `saveAsHBaseTable(rdd: SchemaRDD ...)` method. Thank you in
advance.
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-39718976
I am quite confused about `InputStreamsSuite`. It passes on my local machine,
and it is a test case from the `streaming` module; I think my pull request does not touch
any related code
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-39719475
The error is from `https://travis-ci.org/apache/spark/builds/22424147`. I
will trigger Travis again after that bug is fixed on master.
GitHub user haosdent opened a pull request:
https://github.com/apache/spark/pull/346
Make streaming/test pass.
From this [commit][1], `SparkBuild.scala` adds a new javaOptions entry
`-Dsun.io.serialization.extendedDebugInfo=true` in Test. This makes
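For reference, the javaOptions change referred to above would look roughly like this in `SparkBuild.scala`; this is a sketch of the sbt setting form, not the exact lines from that commit.

```scala
// Sketch: enabling extended serialization debug info for tests in sbt.
// With this JVM flag, a NotSerializableException reports the chain of
// object references that led to the non-serializable object.
javaOptions in Test += "-Dsun.io.serialization.extendedDebugInfo=true"
```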
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/346#issuecomment-39759846
After pull request #295 was merged, the Travis build failed:
https://travis-ci.org/apache/spark/jobs/22424149
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/346#issuecomment-39759960
The complete failure log from Travis:
```
[info] - actor input stream *** FAILED *** (8 seconds, 991 milliseconds)
[info] 0 did not equal 9
```
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/346#issuecomment-39760589
I believe I already filed a JIRA for it.
@marmbrus Could you post the JIRA link for it? If that test case in
`InputStreamsSuite` is flaky, maybe we should
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/346#issuecomment-39802109
@marmbrus Thanks for your reply. One more question: why did this
build (https://travis-ci.org/apache/spark/jobs/22151828) report `exceeded 50.0 minutes.`?
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/346#issuecomment-39803998
@marmbrus IMHO, I am afraid this option triggers pre-existing bugs
related to serialization. After the commit, all builds in Travis have two problems:
1. Failed
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/346#issuecomment-39804422
Thank you for your quick reply. I should do some reading about this. Thank
you again!
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-39821743
@pwendell @marmbrus Could you help me review this pull request again? Thank
you in advance.
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-39824171
OK, I see. Thank you for your reply. :-) @pwendell
Github user haosdent closed the pull request at:
https://github.com/apache/spark/pull/346
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-40618129
@pwendell Sorry to disturb you again. I saw that 0.9.1 has been released;
could you help me review this pull request again? Thank you very much.
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-40622251
@pwendell Oh, sorry, I misunderstood what you said. The development of Spark is
quite fast
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-44720047
ping @pwendell , 1.0 has been released~
Github user haosdent commented on the pull request:
https://github.com/apache/spark/pull/194#issuecomment-65404868
Thank you very much. :-)
Github user haosdent closed the pull request at:
https://github.com/apache/spark/pull/194
Github user haosdent commented on the issue:
https://github.com/apache/spark/pull/11887
@HyukjinKwon @skonto Sorry for the delay, MESOS-4992 has been fixed.