[jira] [Commented] (SPARK-15243) Binarizer.explainParam(u"...") raises ValueError

2016-05-10 Thread Kazuki Yokoishi (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279299#comment-15279299
 ] 

Kazuki Yokoishi commented on SPARK-15243:
-

Thank you for handling this issue.
However, the same problems caused by isinstance(obj, str) seem to remain in 
dataframe.py and types.py:

$ grep -r "isinstance(.*, str)" python/pyspark/
python/pyspark/ml/param/__init__.py:if isinstance(paramName, str):
python/pyspark/ml/param/__init__.py:elif isinstance(param, str):
python/pyspark/sql/dataframe.py:if not isinstance(col, str):
python/pyspark/sql/dataframe.py:if not isinstance(col, str):
python/pyspark/sql/dataframe.py:if not isinstance(col1, str):
python/pyspark/sql/dataframe.py:if not isinstance(col2, str):
python/pyspark/sql/dataframe.py:if not isinstance(col1, str):
python/pyspark/sql/dataframe.py:if not isinstance(col2, str):
python/pyspark/sql/dataframe.py:if not isinstance(col1, str):
python/pyspark/sql/dataframe.py:if not isinstance(col2, str):
python/pyspark/sql/types.py:if not isinstance(name, str):
python/pyspark/sql/types.py:if isinstance(field, str) and data_type is None:
python/pyspark/sql/types.py:if isinstance(data_type, str):
python/pyspark/sql/types.py:if isinstance(key, str):
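
For reference, a minimal sketch of a Python 2/3 compatible check that could replace those isinstance(obj, str) tests (illustrative only, not the actual Spark code; the helper name is made up):
{noformat}
import sys

# Python 3 has no separate unicode type, so alias basestring to str there.
if sys.version >= '3':
    basestring = str

def _require_string(value, arg_name):
    # Accepts both str and unicode on Python 2, and str on Python 3.
    if not isinstance(value, basestring):
        raise TypeError("%s should be a string, got %r" % (arg_name, value))
{noformat}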

> Binarizer.explainParam(u"...") raises ValueError
> 
>
> Key: SPARK-15243
> URL: https://issues.apache.org/jira/browse/SPARK-15243
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark
>Affects Versions: 2.0.0
> Environment: CentOS 7, Spark 1.6.0
>Reporter: Kazuki Yokoishi
>Priority: Minor
>
> When unicode is passed to Binarizer.explainParam(), a ValueError occurs.
> To reproduce:
> {noformat}
> >>> binarizer = Binarizer(threshold=1.0, inputCol="values", outputCol="features")
> >>> binarizer.explainParam("threshold") # str can be passed
> 'threshold: threshold in binary classification prediction, in range [0, 1] (default: 0.0, current: 1.0)'
> >>> binarizer.explainParam(u"threshold") # unicode cannot be passed
> ---------------------------------------------------------------------------
> ValueError                                Traceback (most recent call last)
> <ipython-input-...> in <module>()
> ----> 1 binarizer.explainParam(u"threshold")
> /usr/spark/current/python/pyspark/ml/param/__init__.py in explainParam(self, param)
>      96         default value and user-supplied value in a string.
>      97         """
> ---> 98         param = self._resolveParam(param)
>      99         values = []
>     100         if self.isDefined(param):
> /usr/spark/current/python/pyspark/ml/param/__init__.py in _resolveParam(self, param)
>     231             return self.getParam(param)
>     232         else:
> --> 233             raise ValueError("Cannot resolve %r as a param." % param)
>     234
>     235     @staticmethod
> ValueError: Cannot resolve u'threshold' as a param.
> {noformat}
> The same errors occur in other methods:
> * Binarizer.hasDefault()
> * Binarizer.getOrDefault()
> * Binarizer.isSet()
> These errors are caused by the *isinstance(obj, str)* checks in 
> pyspark.ml.param.Params._resolveParam().
> basestring should be used instead of str in isinstance() for backward 
> compatibility, as below.
> {noformat}
> if sys.version >= '3':
>     basestring = str
>
> if isinstance(obj, basestring):
>     # TODO
> {noformat}






[jira] [Created] (SPARK-15244) Type of column name created with sqlContext.createDataFrame() is not consistent.

2016-05-09 Thread Kazuki Yokoishi (JIRA)
Kazuki Yokoishi created SPARK-15244:
---

 Summary: Type of column name created with sqlContext.createDataFrame() is not consistent.
 Key: SPARK-15244
 URL: https://issues.apache.org/jira/browse/SPARK-15244
 Project: Spark
  Issue Type: Bug
  Components: PySpark
Affects Versions: 2.0.0
 Environment: CentOS 7, Spark 1.6.0
Reporter: Kazuki Yokoishi
Priority: Minor


StructField() converts the field name to str in __init__.
But when a list of str/unicode is passed to sqlContext.createDataFrame() as the 
schema, the type of StructField.name is not converted.

To reproduce:
{noformat}
>>> schema = StructType([StructField(u"col", StringType())])
>>> df1 = sqlContext.createDataFrame([("a",)], schema)
>>> df1.columns # "col" is str
['col']
>>> df2 = sqlContext.createDataFrame([("a",)], [u"col"])
>>> df2.columns # "col" is unicode
[u'col']
{noformat}
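
A possible workaround sketch (just an illustration, not a fix): build an explicit StructType so the names go through StructField.__init__'s conversion, instead of passing a plain list of names. The column type below is an assumption for the example:
{noformat}
from pyspark.sql.types import StructType, StructField, StringType

names = [u"col"]
schema = StructType([StructField(n, StringType()) for n in names])

df = sqlContext.createDataFrame([("a",)], schema)
df.columns  # names have gone through StructField's conversion to str
{noformat}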







[jira] [Created] (SPARK-15243) Binarizer.explainParam(u"...") raises ValueError

2016-05-09 Thread Kazuki Yokoishi (JIRA)
Kazuki Yokoishi created SPARK-15243:
---

 Summary: Binarizer.explainParam(u"...") raises ValueError
 Key: SPARK-15243
 URL: https://issues.apache.org/jira/browse/SPARK-15243
 Project: Spark
  Issue Type: Bug
  Components: PySpark
Affects Versions: 2.0.0
 Environment: CentOS 7, Spark 1.6.0
Reporter: Kazuki Yokoishi
Priority: Minor


When unicode is passed to Binarizer.explainParam(), a ValueError occurs.

To reproduce:
{noformat}
>>> binarizer = Binarizer(threshold=1.0, inputCol="values", outputCol="features")
>>> binarizer.explainParam("threshold") # str can be passed
'threshold: threshold in binary classification prediction, in range [0, 1] (default: 0.0, current: 1.0)'

>>> binarizer.explainParam(u"threshold") # unicode cannot be passed
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 binarizer.explainParam(u"threshold")

/usr/spark/current/python/pyspark/ml/param/__init__.py in explainParam(self, param)
     96         default value and user-supplied value in a string.
     97         """
---> 98         param = self._resolveParam(param)
     99         values = []
    100         if self.isDefined(param):

/usr/spark/current/python/pyspark/ml/param/__init__.py in _resolveParam(self, param)
    231             return self.getParam(param)
    232         else:
--> 233             raise ValueError("Cannot resolve %r as a param." % param)
    234 
    235     @staticmethod

ValueError: Cannot resolve u'threshold' as a param.
{noformat}

The same errors occur in other methods (a quick reproduction sketch follows the list):
* Binarizer.hasDefault()
* Binarizer.getOrDefault()
* Binarizer.isSet()
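
For reference, a quick reproduction sketch (assuming these methods fail the same way because they also resolve the param name through Params._resolveParam()):
{noformat}
>>> binarizer.hasDefault(u"threshold")    # expected to raise the same ValueError
>>> binarizer.getOrDefault(u"threshold")  # expected to raise the same ValueError
>>> binarizer.isSet(u"threshold")         # expected to raise the same ValueError
{noformat}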

These errors are caused by the *isinstance(obj, str)* checks in 
pyspark.ml.param.Params._resolveParam().

basestring should be used instead of str in isinstance() for backward 
compatibility, as below.
{noformat}
if sys.version >= '3':
    basestring = str

if isinstance(obj, basestring):
    # TODO
{noformat}
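
For illustration, a sketch of how _resolveParam() might look with that change (simplified; the Param-instance branch and the _shouldOwn() call are assumed from the surrounding code, not quoted from the actual source):
{noformat}
import sys

if sys.version >= '3':
    basestring = str

def _resolveParam(self, param):
    # Accept either a Param instance or a param name given as str/unicode.
    if isinstance(param, Param):
        self._shouldOwn(param)
        return param
    elif isinstance(param, basestring):
        return self.getParam(param)
    else:
        raise ValueError("Cannot resolve %r as a param." % param)
{noformat}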



