[ https://issues.apache.org/jira/browse/SPARK-27249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17086029#comment-17086029 ]

Nick Afshartous commented on SPARK-27249:
-----------------------------------------

[~enrush] Hi Everett,

Using the {{Iterator}} approach would seem to deviate from the existing 
{{Transformer}} contract; more specifically, the {{transform}} function should 
output a new {{DataFrame}} object. What I would propose instead is adding a new 
function, {{Transformer.compose}}, to allow the composition of {{Transformers}}.

{code}
  def compose(other: Transformer): Transformer = {
    val self = this  // capture the outer Transformer; a plain `this` inside the
                     // anonymous class below would refer to the new instance
    new Transformer {
      override def transform(dataset: Dataset[_]): DataFrame = {
        other.transform(self.transform(dataset))
      }
      // ... plus uid, transformSchema, and copy overrides
    }
  }
{code}

Then one could {{compose}} {{Transformers}}, which would effectively enable 
multi-column transformations.

{code}
val dataFrame = ...
val transformers = List(transformer1, transformer2, transformer3)
val multiColumnTransformer = transformers.reduce((x, y) => x.compose(y))

multiColumnTransformer.transform(dataFrame)
{code}

I'd be happy to submit a PR if this meets your requirements.

> Developers API for Transformers beyond UnaryTransformer
> -------------------------------------------------------
>
>                 Key: SPARK-27249
>                 URL: https://issues.apache.org/jira/browse/SPARK-27249
>             Project: Spark
>          Issue Type: New Feature
>          Components: ML
>    Affects Versions: 3.1.0
>            Reporter: Everett Rush
>            Priority: Minor
>              Labels: starter
>         Attachments: Screen Shot 2020-01-17 at 4.20.57 PM.png
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> It would be nice to have a developers' API for dataset transformations that 
> need more than one column from a row (i.e., UnaryTransformer inputs one column 
> and outputs one column) or that use objects too expensive to initialize 
> repeatedly in a UDF, such as a database connection. 
>  
> Design:
> Abstract class PartitionTransformer extends Transformer and defines the 
> partition transformation function as Iterator[Row] => Iterator[Row]
> NB: This parallels the UnaryTransformer createTransformFunc method
>  
> When developers subclass this transformer, they can provide their own schema 
> for the output Row, in which case the PartitionTransformer creates a row 
> encoder and executes the transformation. Alternatively, the developer can set 
> the output DataType and output column name, and the PartitionTransformer class 
> will create a new schema and a row encoder, and execute the transformation.
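
For illustration, here is a rough sketch of how the {{PartitionTransformer}} described 
above might be wired together with {{mapPartitions}} and a row encoder. This is only a 
sketch: the member names ({{outputSchema}}, {{createTransformFunc}}) are assumptions 
chosen to mirror the description, not an agreed API, and {{RowEncoder}} is an internal 
Catalyst class.

{code}
import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset, Row}
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.types.StructType

// Illustrative sketch only -- member names and signatures are assumptions.
abstract class PartitionTransformer(override val uid: String) extends Transformer {

  def this() = this(Identifiable.randomUID("partitionTransformer"))

  // Schema of the rows produced by the partition function (supplied by the subclass).
  def outputSchema: StructType

  // Parallels UnaryTransformer.createTransformFunc, but over a whole partition, so
  // expensive resources (e.g. a database connection) can be set up once per partition.
  def createTransformFunc: Iterator[Row] => Iterator[Row]

  override def transformSchema(schema: StructType): StructType = outputSchema

  override def transform(dataset: Dataset[_]): DataFrame = {
    val encoder = RowEncoder(outputSchema)
    dataset.toDF().mapPartitions(createTransformFunc)(encoder).toDF()
  }

  override def copy(extra: ParamMap): Transformer = defaultCopy(extra)
}
{code}

A concrete subclass would then only need to supply the output schema and the 
partition-level function.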


