Github user chouqin commented on the pull request:

    https://github.com/apache/spark/pull/2595#issuecomment-57497897
  
    @mengxr I also found that training terminates early in some cases, but it
    only occurred in PySpark; using the same data and strategy in Scala gives
    the correct result. Here are the models trained by PySpark and Scala
    (using the data in python/pyspark/mllib/tests.py, line 125):
    
    PySpark:
    ```
    DecisionTreeModel classifier of depth 1 with 3 nodes
      If (feature 0 in {1.0})
       Predict: 0.0
      Else (feature 0 not in {1.0})
       Predict: 1.0
    ```
    
    Scala:
    ```
    DecisionTreeModel classifier of depth 1 with 3 nodes
      If (feature 0 in {1.0})
       Predict: 0.0
      Else (feature 0 not in {1.0})
       Predict: 1.0
    ```
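
    For reference, a minimal sketch of the PySpark call (the data and
    `categoricalFeaturesInfo` below are stand-ins for the categorical example
    in tests.py, not necessarily the exact values at line 125):

    ```python
    from pyspark import SparkContext
    from pyspark.mllib.regression import LabeledPoint
    from pyspark.mllib.tree import DecisionTree

    sc = SparkContext("local", "dt-repro")

    # Assumed stand-in for the categorical data in
    # python/pyspark/mllib/tests.py; the real values may differ.
    data = sc.parallelize([
        LabeledPoint(0.0, [1.0, 0.0]),
        LabeledPoint(1.0, [0.0, 1.0]),
        LabeledPoint(0.0, [1.0, 0.0]),
        LabeledPoint(1.0, [0.0, 1.0]),
    ])

    # Both features treated as 2-valued categoricals (also an assumption).
    model = DecisionTree.trainClassifier(
        data, numClasses=2, categoricalFeaturesInfo={0: 2, 1: 2})
    print(model.toDebugString())
    ```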
    
    I think it is most likely that PySpark calls `DecisionTree.train` with the
    wrong arguments. I have checked all of the parameters except the input
    RDD; the input RDD may get changed after serialization and
    deserialization, but I am not sure about that yet.
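
    One partial check on the Python side (this only exercises PySpark's
    pickle round trip, not the Java-side deserialization) is to round-trip a
    `LabeledPoint` through the serializer and compare:

    ```python
    from pyspark.mllib.regression import LabeledPoint
    from pyspark.serializers import PickleSerializer

    # Round-trip one LabeledPoint through PySpark's pickle serializer; any
    # corruption on the Python half of the pipeline would show up here.
    ser = PickleSerializer()
    p = LabeledPoint(1.0, [0.0, 1.0])
    q = ser.loads(ser.dumps(p))
    assert str(q) == str(p), (p, q)
    ```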
    
    Is there a good way to debug PySpark code that calls Spark's API? Can we
    get the log messages from the Scala code?
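
    For the latter, one possibility (hedged: `sc._jvm` is an internal Py4J
    handle, not a public API) is to raise the log4j level for the MLlib tree
    package from PySpark:

    ```python
    # Raise the JVM-side log level so the Scala DecisionTree's debug output
    # reaches the driver console. sc._jvm is internal, not a supported API.
    log4j = sc._jvm.org.apache.log4j
    log4j.LogManager.getLogger("org.apache.spark.mllib.tree") \
         .setLevel(log4j.Level.DEBUG)
    ```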

