Hey, I don't think that's the issue; foreach is called on 'results', which is a DStream of floats, so naturally it passes RDDs to its function.
And either way, changing the code in the first mapper to comment out the map/reduce process on the RDD:

    Float f = 1.0f;
    //nnRdd.map(new Function<NeuralNet, Float>() {
    //
    //    private static final long serialVersionUID = 876245667956566483L;
    //
    //    @Override
    //    public Float call(NeuralNet nn) throws Exception {
    //        return 1.0f;
    //    }
    //}).reduce(new Function2<Float, Float, Float>() {
    //
    //    private static final long serialVersionUID = 5461230777841578072L;
    //
    //    @Override
    //    public Float call(Float left, Float right) throws Exception {
    //        return left + right;
    //    }
    //});
    return Arrays.asList(f);

works as expected, so it's most likely that RDD.map().reduce() that's somehow the issue. I just don't get why it works when there's a .print() at the end but not a .foreach().

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Odd-error-when-using-a-rdd-map-within-a-stream-map-tp14551p14652.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
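As an aside, the print()-vs-foreach() difference may come down to lazy evaluation: like RDD transformations, a map() only builds a plan, and nothing actually runs until an action (such as print()) forces it. Plain java.util.stream pipelines behave the same way, so here is a minimal sketch of that pattern (plain Java, not Spark; the class and variable names are made up for illustration):

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.stream.Stream;

    public class LazyDemo {
        public static void main(String[] args) {
            List<Float> data = Arrays.asList(1.0f, 2.0f, 3.0f);
            AtomicInteger mapCalls = new AtomicInteger();

            // Like an RDD transformation, Stream.map() is lazy: building
            // the pipeline invokes the mapper zero times.
            Stream<Float> mapped = data.stream().map(f -> {
                mapCalls.incrementAndGet();
                return f + 1.0f;
            });
            System.out.println("after map: " + mapCalls.get());

            // A terminal operation (analogous to an RDD action such as
            // print()) forces the whole pipeline to execute.
            float sum = mapped.reduce(0.0f, Float::sum);
            System.out.println("after reduce: " + mapCalls.get() + ", sum=" + sum);
        }
    }

If the function handed to the DStream never triggers an action on the inner RDD, the nested map/reduce job may simply never run, which would be consistent with the behavior described above.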