[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2018-08-02 Thread AlbertPlaPlanas
Github user AlbertPlaPlanas commented on the issue: https://github.com/apache/spark/pull/16486 Was this ever implemented?

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-12-14 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/16486 Can one of the admins verify this patch?

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-05-01 Thread leonfl
Github user leonfl commented on the issue: https://github.com/apache/spark/pull/16486 @mrjrdnthms Yes, your understanding is correct; in Scala it looks like this:
```
val rows: RDD[Row] = df.rdd.map { rowIn =>
  // handle the rowIn and return a Row
  ...
}
```
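A complete, minimal sketch of the pattern leonfl's truncated snippet gestures at, assuming a DataFrame `df` with a vector column named `features` of known fixed size and a `SparkSession` in scope as `spark` (the names and the size are illustrative, not from the PR):

```scala
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}

val numFeatures = 3 // illustrative; in practice derive this from the data or ML attributes

// handle each input Row and return a Row with one scalar field per vector element
val rows: RDD[Row] = df.rdd.map { rowIn =>
  val v = rowIn.getAs[Vector]("features")
  Row.fromSeq((0 until numFeatures).map(i => v(i)))
}

// build a matching schema so the flattened RDD can become a DataFrame again
val schema = StructType((0 until numFeatures).map(i =>
  StructField(s"features_$i", DoubleType, nullable = false)))
val disassembled = spark.createDataFrame(rows, schema)
```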

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-04-30 Thread mrjrdnthms
Github user mrjrdnthms commented on the issue: https://github.com/apache/spark/pull/16486 @leonfl The Python UDF is too slow for my task. By "mapPartitions and row iterator" do you mean doing the transformation on the RDD directly instead of on the DataFrame? Sorry for the basic question.

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-04-24 Thread leonfl
Github user leonfl commented on the issue: https://github.com/apache/spark/pull/16486 @mrjrdnthms, this is implemented with a UDF, which runs a little slower but is easy to use. If you want it to run faster, you can implement it with mapPartitions and a row iterator instead of a UDF.
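A short sketch of the mapPartitions-and-row-iterator variant leonfl mentions, using the same illustrative `features` column as above; the point is that any expensive per-task setup can run once per partition before the row iterator is walked:

```scala
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row

val rows = df.rdd.mapPartitions { iter =>
  // any expensive one-time setup would go here, once per partition
  iter.map { rowIn =>
    val v = rowIn.getAs[Vector]("features")
    // emit one scalar field per vector element
    Row.fromSeq(v.toArray.toSeq)
  }
}
```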

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-04-19 Thread mrjrdnthms
Github user mrjrdnthms commented on the issue: https://github.com/apache/spark/pull/16486 I could use this. I have a UDF to pick out the single values I want, but my implementation is slow; here is my Python UDF: `probTrue_udf = udf(lambda value: value[1].item(), FloatType())` I was …
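A hedged Scala counterpart to this Python UDF, assuming a classifier output column named `probability` (as the `value[1]` element access suggests; the column name is an assumption). Running the extraction in the JVM avoids shipping every row to a Python worker, which is usually where the slowness comes from:

```scala
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// same extraction as the Python UDF, but executed in the JVM
val probTrue = udf((v: Vector) => v(1))
val out = df.withColumn("probTrue", probTrue(col("probability")))
```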

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-01-08 Thread leonfl
Github user leonfl commented on the issue: https://github.com/apache/spark/pull/16486 @jkbradley, could you also help check this patch, since you are familiar with this issue? Thanks.

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-01-06 Thread leonfl
Github user leonfl commented on the issue: https://github.com/apache/spark/pull/16486 @mengxr, could you help check this patch? Thanks.

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-01-06 Thread leonfl
Github user leonfl commented on the issue: https://github.com/apache/spark/pull/16486 It's a counterpart to VectorAssembler that makes it easy for users to move between individual fields and a vector field. Pulling out a single field is easy, but pulling out all of the fields in a vector still needs some code.
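A minimal sketch of the "still needs some code" part without the proposed Transformer: spreading a fixed-size vector column into one scalar column per element. This assumes Spark 3.0+, where `org.apache.spark.ml.functions.vector_to_array` is available; on the Spark versions current when this thread was written, the RDD-based sketches above apply instead. The column names and size are illustrative:

```scala
import org.apache.spark.ml.functions.vector_to_array
import org.apache.spark.sql.functions.col

val n = 3 // illustrative vector size

// convert the vector to a plain array column, then pull out each element
val arr = df.withColumn("arr", vector_to_array(col("features")))
val cols = (0 until n).map(i => col("arr").getItem(i).alias(s"features_$i"))
val disassembled = arr.select(cols: _*)
```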

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-01-06 Thread srowen
Github user srowen commented on the issue: https://github.com/apache/spark/pull/16486 I don't think this is worth adding. It's pretty easy to pull out a single field from a vector already.

[GitHub] spark issue #16486: [SPARK-13610][ML] Create a Transformer to disassemble ve...

2017-01-05 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/16486 Can one of the admins verify this patch?