Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/455#discussion_r13414996
  
    --- Diff: 
examples/src/main/scala/org/apache/spark/examples/CassandraCQLTest.scala ---
    @@ -32,6 +32,23 @@ import org.apache.hadoop.mapreduce.Job
     
     import org.apache.spark.{SparkConf, SparkContext}
     import org.apache.spark.SparkContext._
    +import org.apache.spark.api.python.Converter
    +
    +class CassandraCQLKeyConverter extends Converter {
    +  import collection.JavaConversions._
    +  override def convert(obj: Any) = {
    +    val result = obj.asInstanceOf[java.util.Map[String, ByteBuffer]]
    +    mapAsJavaMap(result.mapValues(bb => ByteBufferUtil.toInt(bb)))
    +  }
    +}
    +
    +class CassandraCQLValueConverter extends Converter {
    +  import collection.JavaConversions._
    +  override def convert(obj: Any) = {
    +    val result = obj.asInstanceOf[java.util.Map[String, ByteBuffer]]
    +    mapAsJavaMap(result.mapValues(bb => ByteBufferUtil.string(bb)))
    +  }
    +}
    --- End diff --
    
    Move these to separate source files, since they're not used in the Scala 
example. Maybe we could even have a "pythonConverters" subpackage of 
org.apache.spark.examples to make clear that these are only used by the .py examples.
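    To illustrate the suggestion, here is a minimal sketch of what such a 
relocated key converter could look like. The `Converter` trait below is a 
stand-in for `org.apache.spark.api.python.Converter` (the real file would 
import it from Spark), the file path in the comment is hypothetical, and 
`bb.getInt` replaces Cassandra's `ByteBufferUtil.toInt` so the snippet is 
self-contained:

    ```scala
    import java.nio.ByteBuffer
    import scala.collection.JavaConverters._

    // Stand-in for org.apache.spark.api.python.Converter; a real
    // pythonConverters file would import Spark's trait instead.
    trait Converter {
      def convert(obj: Any): Any
    }

    // Hypothetical relocated file, e.g.
    // examples/src/main/scala/org/apache/spark/examples/pythonConverters/CassandraConverters.scala
    // Decodes each column's ByteBuffer into a JVM Int so PySpark can pickle it.
    class CassandraCQLKeyConverter extends Converter {
      override def convert(obj: Any): Any = {
        val result = obj.asInstanceOf[java.util.Map[String, ByteBuffer]]
        // Equivalent of ByteBufferUtil.toInt: read a 4-byte big-endian int.
        result.asScala.map { case (name, bb) => (name, bb.getInt) }.toMap.asJava
      }
    }

    object ConverterDemo extends App {
      // Build a fake Cassandra key row: one column holding the int 42.
      val bb = ByteBuffer.allocate(4).putInt(42)
      bb.flip()
      val row = new java.util.HashMap[String, ByteBuffer]()
      row.put("user_id", bb)

      val converted = new CassandraCQLKeyConverter()
        .convert(row).asInstanceOf[java.util.Map[String, Int]]
      println(converted.get("user_id")) // 42
    }
    ```

    Keeping the trait implementation in its own file this way means the Scala 
example compiles without it, and the subpackage name documents the 
Python-only purpose.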

