Github user npanj commented on the pull request:
https://github.com/apache/spark/pull/901#issuecomment-44478613
@rxin - Allowing arbitrary data types sounds like a good idea. I actually
tried to do something like:
--
def edgeListFile[@specialized(Long, Int, Double) ED: ClassTag](
    sc: SparkContext,
    path: String,
    canonicalOrientation: Boolean = false,
    minEdgePartitions: Int = 1)
  : Graph[Int, ED] =
and
if (lineArray.length >= 3) lineArray(2).asInstanceOf[ED]
else 1.asInstanceOf[ED]
--
But I am running into this error:
--
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer
    at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:106)
--
I guess I will have to handle primitive types separately?
(I am quite new to Scala.) Are there any known patterns for handling this
kind of casting? If you can point me to a code snippet within the Spark
codebase (or elsewhere), that would be great.
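The ClassCastException above is expected: asInstanceOf[ED] only reinterprets
the reference at runtime and never parses the String token into an Int or
Double. A minimal sketch of one common workaround (hypothetical helper name,
not from the Spark codebase) is to let the caller pass an explicit
String => ED parser instead of casting:

```scala
import scala.reflect.ClassTag

// Hypothetical helper (not Spark API): rather than casting the raw token
// with asInstanceOf, accept a parser function that knows how to build ED,
// plus a default for lines without a third column.
def parseEdgeAttr[ED: ClassTag](
    lineArray: Array[String],
    parse: String => ED,
    default: ED): ED =
  if (lineArray.length >= 3) parse(lineArray(2)) else default

// Usage: the caller decides how the third column becomes the edge type.
val tokens = "1 2 3.5".split("\\s+")
val weight = parseEdgeAttr[Double](tokens, _.toDouble, 1.0) // 3.5
```

This sidesteps boxing/unboxing entirely, since the parser produces a value
of the right primitive type instead of pretending a String already is one.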
Also, do you follow any Scala code style guide, so that I can follow it
for future patches?