[ https://issues.apache.org/jira/browse/FLINK-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jelmer Kuperus updated FLINK-8828:
----------------------------------
    Description: 
A collect function is a method that takes a partial function as its parameter, 
applies it to every element of the collection on which the partial function is 
defined, and returns a new collection of the results.

It can be found on all [core Scala collection 
classes|http://www.scala-lang.org/api/2.9.2/scala/collection/TraversableLike.html] 
as well as on Spark's [RDD 
interface|https://spark.apache.org/docs/2.2.0/api/scala/index.html#org.apache.spark.rdd.RDD].
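
For illustration, on a plain Scala collection collect filters and maps in a 
single step, keeping only the elements on which the partial function is defined:
{noformat}
val mixed: List[Any] = List(1, "one", 2, "two", 3)

// The Strings are dropped because the partial function is not defined
// for them; the Ints are kept and doubled.
val doubled: List[Int] = mixed.collect { case i: Int => i * 2 }
// doubled == List(2, 4, 6)
{noformat}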

To understand its utility, imagine the following scenario:

Given a DataStream that produces events of type _Purchase_ and _View_, 
transform this stream into a stream of purchase amounts over 1000 euros.

Currently an implementation might look like this:
{noformat}
val x = dataStream
  .filter(_.isInstanceOf[Purchase])
  .map(_.asInstanceOf[Purchase])
  .filter(_.amount > 1000)
  .map(_.amount)
{noformat}
Alternatively, you could do this:
{noformat}
dataStream.flatMap(_ match {
  case p: Purchase if p.amount > 1000 => Some(p.amount)
  case _ => None
})
{noformat}
But with collect implemented, it could look like this:
{noformat}
dataStream.collect {
  case p: Purchase if p.amount > 1000 => p.amount
}
{noformat}
 

This is a lot nicer to both read and write.
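
For reference, a minimal sketch of what an implementation could look like on the 
Scala DataStream, assuming it simply delegates to flatMap by lifting the partial 
function (the CollectSyntax and RichDataStream names are illustrative only, and 
the eventual implementation in Flink may well differ):
{noformat}
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.api.scala.DataStream

object CollectSyntax {
  // Hypothetical extension: for every element t on which pf is defined,
  // emit pf(t); elements outside pf's domain are dropped.
  implicit class RichDataStream[T](val stream: DataStream[T]) extends AnyVal {
    def collect[R: TypeInformation](pf: PartialFunction[T, R]): DataStream[R] =
      stream.flatMap((t: T) => pf.lift(t).toList)
  }
}
{noformat}
With such an implicit class in scope, the example above would work as written.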


> Add collect method to DataStream / DataSet scala api
> ----------------------------------------------------
>
>                 Key: FLINK-8828
>                 URL: https://issues.apache.org/jira/browse/FLINK-8828
>             Project: Flink
>          Issue Type: Improvement
>          Components: Core, DataSet API, DataStream API, Scala API
>    Affects Versions: 1.4.0
>            Reporter: Jelmer Kuperus
>            Priority: Major


