GitHub user aborsu985 commented on the pull request:

    https://github.com/apache/spark/pull/4504#issuecomment-82969443
  
    Thank you for the tip, I'll look into the Java tests next week when I have some time.
    In the meantime, I changed the RegexTokenizer to extend Tokenizer instead of UnaryTransformer. This seemed like a good idea at the time, since it let me test both tokenizers with one function, but it appears to cause some problems on the Java side. I'm looking into it.
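    For reference, a minimal, self-contained sketch of the inheritance pattern involved (simplified stand-ins, not the actual spark.ml source; the setter bodies here are illustrative). Because UnaryTransformer's last type parameter is the concrete subclass, extending Tokenizer fixes that parameter to Tokenizer, so fluent setters no longer return RegexTokenizer:

```scala
// Simplified model of spark.ml's F-bounded UnaryTransformer; not the real code.
abstract class UnaryTransformer[IN, OUT, T <: UnaryTransformer[IN, OUT, T]] {
  self: T =>

  protected var inputCol: String = _

  // Fluent setter: returns the concrete subclass type T so calls can chain.
  def setInputCol(name: String): T = { inputCol = name; this }

  protected def createTransformFunc: IN => OUT

  def transform(in: IN): OUT = createTransformFunc(in)
}

// Tokenizer fixes T to Tokenizer.
class Tokenizer extends UnaryTransformer[String, Seq[String], Tokenizer] {
  override protected def createTransformFunc: String => Seq[String] =
    _.toLowerCase.split("\\s+").toSeq
}

// Subclassing Tokenizer (instead of UnaryTransformer directly) inherits
// T = Tokenizer, so setInputCol now returns Tokenizer, not RegexTokenizer.
class RegexTokenizer extends Tokenizer {
  private var pattern: String = "\\s+"

  def setPattern(p: String): RegexTokenizer = { pattern = p; this }

  override protected def createTransformFunc: String => Seq[String] =
    _.split(pattern).toSeq
}

object TokenizerDemo extends App {
  val tok = new RegexTokenizer().setPattern(",")
  println(tok.transform("a,b,c")) // prints the tokens "a", "b", "c"

  // The Java-visible problem: after setInputCol the static type is Tokenizer,
  // so RegexTokenizer-specific setters cannot be chained without a cast.
  // new RegexTokenizer().setInputCol("text").setPattern(",") // does not compile
}
```

    This is one plausible source of the Java trouble described above; declaring RegexTokenizer directly against UnaryTransformer[String, Seq[String], RegexTokenizer] keeps the setters' return type precise.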

