Any pointers?
Many thanks,
Alex
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/scala-version-of-flink-mongodb-example-tp8971p11489.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
Hi Frank,
I didn't try to run the code, but this does not show a compiler error in
IntelliJ:
> input.map( mapfunc2 _ )
Decomposing the Tuple2 into two separate arguments only works with
Scala's pattern matching technique (this is the second approach I posted).
The Java API is not capable o
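Both working forms that come up in this thread — eta-expanding a named function with `_`, and decomposing the tuple with pattern matching — can be tried in plain Scala on an ordinary collection of pairs, without Flink. A minimal sketch (the values here are illustrative, not from the thread):

```scala
// A named function taking the whole pair, as in Frank's mapfunc2
def mapfunc2(pair: (String, String)): String = pair._1

val input = Seq(("key1", "v1"), ("key2", "v2"))

// Eta expansion: `_` turns the method into a function value
val a = input.map(mapfunc2 _)

// Pattern matching: decomposes each Tuple2 into named parts
val b = input.map { case (key, value) => key }

println(a) // List(key1, key2)
println(b) // List(key1, key2)
```

In both cases `map` receives a function of one argument (the whole tuple); only the `{ case ... }` block lets you name the tuple's parts separately.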
Hello Fabian,
Thanks, your solution does work indeed; however, I don't understand why.
When I replace the lambda with an explicit function
def mapfunc2(pair: Tuple2[BSONWritable, BSONWritable]) : String = {
return pair._1.toString
}
input.map mapfunc2
I get the error below, which seemingly indica
Hi Frank,
input should be of type DataSet[(BSONWritable, BSONWritable)], so each
element is a Tuple2[BSONWritable, BSONWritable], right?
Something like this should work:
input.map( pair => pair._1.toString )
pair is a Tuple2[BSONWritable, BSONWritable], and pair._1 accesses the key
of the pair.
Alternatively you c
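The lambda form behaves the same on a plain Scala Tuple2, which can be checked without any Flink dependencies (the values below are made up for illustration):

```scala
// pair._1 is the first element (the key), pair._2 the second (the value);
// this is the same access pattern as on Flink's DataSet of pairs.
val input = Seq((1, "a"), (2, "b"))
val keys = input.map(pair => pair._1.toString)
println(keys) // List(1, 2)
```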
Hello,
I'm new to Flink, and I'm trying to get a MongoDB Hadoop input format
working in Scala. However, I get lost in the Scala generics system ...
could somebody help me?
Code is below; neither version works (compile error at the "map" call),
either because of "method not applicable" or becau
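For reference, wiring the MongoDB Hadoop input format into the Flink Scala API usually looks roughly like the sketch below. This is a hedged sketch, not a verified setup: it assumes the flink-hadoop-compatibility and mongo-hadoop artifacts are on the classpath and a MongoDB instance is reachable; the URI and database/collection names are placeholders, and the exact class names can differ between mongo-hadoop versions.

```scala
// Hedged sketch: class/package names come from mongo-hadoop and
// flink-hadoop-compatibility and may vary by version; the URI is a placeholder.
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.hadoop.mapred.HadoopInputFormat
import org.apache.hadoop.mapred.JobConf
import com.mongodb.hadoop.mapred.MongoInputFormat
import com.mongodb.hadoop.io.BSONWritable

val env = ExecutionEnvironment.getExecutionEnvironment

val jobConf = new JobConf()
jobConf.set("mongo.input.uri", "mongodb://localhost:27017/mydb.mycollection")

// Key and value are both BSONWritable for the MongoDB input format
val hadoopInput = new HadoopInputFormat[BSONWritable, BSONWritable](
  new MongoInputFormat(), classOf[BSONWritable], classOf[BSONWritable], jobConf)

val input: DataSet[(BSONWritable, BSONWritable)] = env.createInput(hadoopInput)

// The form discussed later in the thread: extract the key with a lambda
val keys: DataSet[String] = input.map(pair => pair._1.toString)
```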