Github user velvia commented on the pull request:
https://github.com/apache/incubator-spark/pull/222#issuecomment-34659521
Hey guys,
Sorry, I should update this PR. The current plan is to move the job server
into a spark-contrib project, but that project's location is still unknown
Github user velvia commented on the pull request:
https://github.com/apache/incubator-spark/pull/576#issuecomment-34715532
My concern with this is that Parquet is typically used for high-performance
OLAP queries, and switching it to JSON would make it much slower. Out of curiosity,
I have
Github user velvia commented on the pull request:
https://github.com/apache/incubator-spark/pull/299#issuecomment-35003997
Anybody want to have a look again?
I'm also open to exploring a slightly different idea, which is to load the
SparkContext with a custom, URLL
Github user velvia commented on the pull request:
https://github.com/apache/incubator-spark/pull/576#issuecomment-35039082
Uri,
What you can do in Scala is define an implicit conversion to your own
class, effectively extending SparkContext yourself. We do this for our own
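For reference, a minimal sketch of the implicit-conversion pattern described above, assuming Scala 2.10-style implicit classes; the names SparkContextExtensions, RichSparkContext, and namedRange are hypothetical illustrations, not part of the job server code:

    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    object SparkContextExtensions {
      // Implicit conversion: once this object is imported, any SparkContext in
      // scope appears to have the extra methods defined on RichSparkContext.
      implicit class RichSparkContext(sc: SparkContext) {
        // Hypothetical extension method: build a named RDD from an integer range.
        def namedRange(name: String, n: Int, numSlices: Int = 2): RDD[Int] =
          sc.parallelize(1 to n, numSlices).setName(name)
      }
    }

    // Usage:
    //   import SparkContextExtensions._
    //   val rdd = sc.namedRange("demo", 100)  // called as if defined on SparkContext

In pre-2.10 Scala the same effect is achieved with a plain wrapper class plus an implicit def conversion from SparkContext to that class.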
Github user velvia commented on the pull request:
https://github.com/apache/incubator-spark/pull/222#issuecomment-35598520
Sourav, not yet. I need to think about how to enable Spark Streaming support
(I'm going to move this to its own repo, at which point pull requests for