[ https://issues.apache.org/jira/browse/SPARK-9062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14631027#comment-14631027 ]

yuhao yang commented on SPARK-9062:
-----------------------------------

This jira is about chaining the Tokenizer and Word2Vec (and CountVectorizer, 
and possibly StopWordsRemover) in a Pipeline:
{code}
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{RegexTokenizer, Word2Vec}
import sqlContext.implicits._

val texts = sc.textFile("/home/yuhao/workspace/DocSet/20_newsgroups/train.txt")
val df = texts.toDF("text")

// Split each document into tokens on runs of non-word characters.
val tokenizer = new RegexTokenizer()
  .setInputCol("text")
  .setOutputCol("tokens")
  .setPattern("\\W+")

// Learn a vector representation for each document from its tokens.
val w2v = new Word2Vec()
  .setInputCol("tokens")
  .setOutputCol("vector")

val pipeline = new Pipeline()
  .setStages(Array(tokenizer, w2v))

val w2vModel = pipeline.fit(df)
val result = w2vModel.transform(df)
{code}
Currently it throws:
{code}
java.lang.IllegalArgumentException: requirement failed: Column tokens must be of type ArrayType(StringType,true) but was actually ArrayType(StringType,false).
{code}

It also seems {code}udf { sentence: Seq[String] => ... }{code} is only compatible 
with {code}new ArrayType(StringType, true){code}. I tried changing the input type 
of Word2Vec to {code}new ArrayType(StringType, false){code}, and then the code 
above reports:
{code}
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve 'UDF(tokens)' due to data type mismatch: argument 1 is expected to be of type array<string>, however, 'tokens' is of type array<string>.;
{code}
on current master (1.4 does not report it).
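That follow-up failure is consistent with how Scala UDF input types are inferred: Seq[String] reflects to ArrayType(StringType, true), so a Seq[String]-based udf can never be declared to accept ArrayType(StringType, false). A sketch (tokenCount is an illustrative name, not Spark API):
{code}
import org.apache.spark.sql.functions.udf

// ScalaReflection maps Seq[String] to ArrayType(StringType, containsNull = true),
// since a Seq of a reference type may contain nulls. The udf's declared input
// type therefore always has containsNull = true, and changing Word2Vec's
// expected type to containsNull = false only moves the mismatch into the
// analyzer, as the exception above shows.
val tokenCount = udf { tokens: Seq[String] => tokens.length }
{code}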



> Change output type of Tokenizer to Array(String, true)
> ------------------------------------------------------
>
>                 Key: SPARK-9062
>                 URL: https://issues.apache.org/jira/browse/SPARK-9062
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML
>            Reporter: yuhao yang
>            Priority: Minor
>
> Currently the output type of Tokenizer is Array(String, false), which is not 
> compatible with Word2Vec and other transformers, since their input type is 
> Array(String, true). A Seq[String] in a udf is treated as Array(String, true) 
> by default. 
> I'm also thinking that for nullable columns, the tokenizer should perhaps 
> return Array(null) for a null value in the input.
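For reference, the change proposed here would amount to flipping one flag in the Tokenizer's declared output schema; a sketch of the relevant override (assuming the UnaryTransformer-based implementation in 1.4):
{code}
// In org.apache.spark.ml.feature.Tokenizer / RegexTokenizer, the output
// schema is declared via UnaryTransformer.outputDataType.
// before (1.4):
override protected def outputDataType: DataType = new ArrayType(StringType, false)
// proposed:
override protected def outputDataType: DataType = new ArrayType(StringType, true)
{code}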


