[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306772#comment-17306772
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/23/21, 5:02 AM:
---

[~dongjoon], I have also tried with Spark 3.1.1 and found a difference in the 
Parsed Logical Plan.

Spark 3.1.1 Parsed Logical Plan: 
{code:java}
== Parsed Logical Plan ==
'CreateViewStatement [temp1_33], select 20E2, 'Project [unresolvedalias(2000.0, None)], false, false, LocalTempView{code}
Spark 2.4 Parsed Logical Plan: 
{code:java}
== Parsed Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2.0E+3, None)]
      +- OneRowRelation
{code}
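The negative scale in the error comes from how the exponent of the literal is carried into the decimal type: 2000 written as `20E2` has unscaled value 20 and exponent 2, i.e. precision 2 and scale -2, which surfaces in the view schema as `decimal(2,-2)`. A minimal sketch of that inference using Python's `Decimal` (an illustration only, not Spark's actual literal-inference code):

```python
from decimal import Decimal

def literal_decimal_type(text):
    # Infer the decimal(precision, scale) a Spark 2.4-style parser would
    # assign to an exact-numeric literal (an illustration, not Spark code).
    d = Decimal(text)
    _sign, digits, exponent = d.as_tuple()
    precision = len(digits)
    scale = -exponent  # a positive exponent (20 x 10^2) yields a negative scale
    return f"decimal({precision},{scale})"

print(literal_decimal_type("20E2"))  # decimal(2,-2)
print(literal_decimal_type("2.5"))   # decimal(2,1)
```

The 3.1.1 plan above shows `2000.0` instead, consistent with newer Spark parsing such exponent literals as a double by default, which sidesteps the negative scale.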



> Select queries fail with Error: java.lang.IllegalArgumentException: Error: 
> name expected at the position 10 of 'decimal(2,-2)' but '-' is found. 
> (state=,code=0)
> -
>
> Key: SPARK-34673
> URL: https://issues.apache.org/jira/browse/SPARK-34673
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.5
> Environment: Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Minor
> Attachments: Screenshot 2021-03-10 at 8.47.00 PM.png, Screenshot 
> 2021-03-19 at 1.33.54 PM.png
>
>
> Temporary views are created.
> Select filter queries are executed on the Temporary views.
>  
> [Actual Issue]: Select queries fail with Error: 
> java.lang.IllegalArgumentException: Error: name expected at the position 10 
> of 'decimal(2,-2)' but '-' is found. (state=,code=0)
>  
> [Expected Result]: Select queries should succeed on the Temporary views.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306703#comment-17306703
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/23/21, 4:50 AM:
---

I can see a difference between the plans generated by the Spark master branch 
and branch-2.4.

Spark master branch plans for this query (create temporary view temp1_33 as 
select *20E2*;):

 
{code:java}
== Parsed Logical Plan ==
'CreateViewStatement [temp1_33], select 20E2, 'Project [unresolvedalias(2000.0, None)], false, false, LocalTempView

== Analyzed Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2000.0, None)]
      +- OneRowRelation

== Optimized Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2000.0, None)]
      +- OneRowRelation

== Physical Plan ==
Execute CreateViewCommand
   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
         +- 'Project [unresolvedalias(2000.0, None)]
            +- OneRowRelation
{code}
 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as 
select *20E2*;):
{code:java}
== Parsed Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2.0E+3, None)]
      +- OneRowRelation
 
== Analyzed Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2.0E+3, None)]
      +- OneRowRelation
 
== Optimized Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2.0E+3, None)]
      +- OneRowRelation
 
== Physical Plan ==
Execute CreateViewCommand
   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
         +- 'Project [unresolvedalias(2.0E+3, None)]
            +- OneRowRelation
{code}
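The reported exception itself is produced when the stored schema type string is read back and validated: a Hive-style type-string check expects only names and digits after '(' or ',', so the '-' of the negative scale in 'decimal(2,-2)' is rejected at character index 10. A toy validator sketching that behaviour (hypothetical, not Hive's actual parser):

```python
def validate_type_string(s):
    # Minimal sketch of a strict type-string validator (not Hive's real
    # parser): after '(' or ',', only name/number characters are accepted,
    # so a negative decimal scale is rejected.
    for i, c in enumerate(s):
        if i > 0 and s[i - 1] in "(," and not c.isalnum():
            raise ValueError(
                f"Error: name expected at the position {i} of '{s}' "
                f"but '{c}' is found."
            )
    return s

print(validate_type_string("decimal(10,2)"))  # accepted unchanged
try:
    validate_type_string("decimal(2,-2)")
except ValueError as err:
    # Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found.
    print(err)
```

In 'decimal(2,-2)' the '-' sits at 0-based index 10, which lines up with the position quoted in the issue title.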
 

 

 


> Select queries fail  with Error: java.lang.IllegalArgumentException: Error: 
> name expected at the position 10 of 'decimal(2,-2)' but '-' is found. 
> (state=,code=0)
> -
>
> Key: SPARK-34673
> URL: https://issues.apache.org/jira/browse/SPARK-34673
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.5
> Environment: Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Minor
> Attachments: Screenshot 2021-03-10 at 8.47.00 PM.png, Screenshot 
> 2021-03-19 at 1.33.54 PM.png
>
>
> Temporary views are created
> Select filter queries are executed on the Temporary views.
>  
> [Actual Issue] : - Select queries fail with Error: 
> java.lang.IllegalArgumentException: Error: name expected at the position 10 
> of 'decimal(2,-2)' but '-' is found. (state=,code=0)
>  
> [Expected Result] :- Select queries should be success on Temporary views.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306703#comment-17306703
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/23/21, 4:45 AM:
---

I can see the difference b/w plan which is creating in spark master branch and 
spark branch-2.4

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*:

 
{code:java}
== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select 20E2, 'Project 
[unresolvedalias(2000.0, None)], false, false, LocalTempView
== Analyzed Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation
== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation
== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation
 
{code}
 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*:

 

 
{code:java}
== Parsed Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(*2.0E+3*, None)]
      +- OneRowRelation
 
== Analyzed Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2.0E+3, None)]
      +- OneRowRelation
 
== Optimized Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2.0E+3, None)]
      +- OneRowRelation
 
== Physical Plan ==
Execute CreateViewCommand
   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
         +- 'Project [unresolvedalias(2.0E+3, None)]
            +- OneRowRelation
{code}
 

 

 


was (Author: ankitraj):
I can see the difference b/w plan which is creating in spark master branch and 
spark branch-2.4

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*:

== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(2000.0, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*:

 

== Parsed Logical Plan ==

CreateViewCommand `temp1_33`, select *20E2*, false, false, LocalTempView

   +- 'Project [unresolvedalias(*2.0E+3*, None)]

      +- OneRowRelation

 

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Optimized Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Physical Plan ==

Execute CreateViewCommand

   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

         +- 'Project [unresolvedalias(2.0E+3, None)]

            +- OneRowRelation

 

 

> Select queries fail  with Error: java.lang.IllegalArgumentException: Error: 
> name expected at the position 10 of 'decimal(2,-2)' but '-' is found. 
> (state=,code=0)
> -
>
> Key: SPARK-34673
> URL: https://issues.apache.org/jira/browse/SPARK-34673
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.5
> Environment: Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Minor
> Attachments: Screenshot 2021-03-10 at 8.47.00 PM.png, Screenshot 
> 2021-03-19 at 1.33.54 PM.png
>
>
> Temporary views are created
> Select filter queries are executed on the Temporary views.
>  
> [Actual Issue] : - Select queries fail with Error: 
> java.lang.IllegalArgumentException: Error: name expected at the position 10 
> of 'decimal(2,-2)' but '-' is found. (state=,code=0)
>  
> [Expected Result] :- Select queries should be success on Temporary views.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306703#comment-17306703
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/23/21, 4:44 AM:
---

I can see the difference b/w plan which is creating in spark master branch and 
spark branch-2.4

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*:

== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(2000.0, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*:

 

== Parsed Logical Plan ==

CreateViewCommand `temp1_33`, select *20E2*, false, false, LocalTempView

   +- 'Project [unresolvedalias(*2.0E+3*, None)]

      +- OneRowRelation

 

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Optimized Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Physical Plan ==

Execute CreateViewCommand

   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

         +- 'Project [unresolvedalias(2.0E+3, None)]

            +- OneRowRelation

 

 


was (Author: ankitraj):
I can see the difference b/w plan which is creating in spark master branch and 
spark branch-2.4

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*:

== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(*2000.0*, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*:

 

== Parsed Logical Plan ==

CreateViewCommand `temp1_33`, select *20E2*, false, false, LocalTempView

   +- 'Project [unresolvedalias(*2.0E+3*, None)]

      +- OneRowRelation

 

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Optimized Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Physical Plan ==

Execute CreateViewCommand

   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

         +- 'Project [unresolvedalias(2.0E+3, None)]

            +- OneRowRelation

 

 

> Select queries fail  with Error: java.lang.IllegalArgumentException: Error: 
> name expected at the position 10 of 'decimal(2,-2)' but '-' is found. 
> (state=,code=0)
> -
>
> Key: SPARK-34673
> URL: https://issues.apache.org/jira/browse/SPARK-34673
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.5
> Environment: Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Minor
> Attachments: Screenshot 2021-03-10 at 8.47.00 PM.png, Screenshot 
> 2021-03-19 at 1.33.54 PM.png
>
>
> Temporary views are created
> Select filter queries are executed on the Temporary views.
>  
> [Actual Issue] : - Select queries fail with Error: 
> java.lang.IllegalArgumentException: Error: name expected at the position 10 
> of 'decimal(2,-2)' but '-' is found. (state=,code=0)
>  
> [Expected Result] :- Select queries should be success on Temporary views.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-

[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306703#comment-17306703
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/23/21, 4:43 AM:
---

I can see the difference b/w plan which is creating in spark master branch and 
spark branch-2.4

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*:

== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(*2000.0*, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*:

 

== Parsed Logical Plan ==

CreateViewCommand `temp1_33`, select *20E2*, false, false, LocalTempView

   +- 'Project [unresolvedalias(*2.0E+3*, None)]

      +- OneRowRelation

 

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Optimized Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Physical Plan ==

Execute CreateViewCommand

   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

         +- 'Project [unresolvedalias(2.0E+3, None)]

            +- OneRowRelation

 

 


was (Author: ankitraj):
I can see the difference b/w plan which is creating in spark master branch and 
spark branch-2.4

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*:

== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(*2000.0*, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*:

 

== Parsed Logical Plan ==

CreateViewCommand `temp1_33`, select *20E2*, false, false, LocalTempView

   +- 'Project [unresolvedalias(*2.0E+3*, None)]

      +- OneRowRelation

 

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Optimized Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Physical Plan ==

Execute CreateViewCommand

   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

         +- 'Project [unresolvedalias(2.0E+3, None)]

            +- OneRowRelation

 

 

> Select queries fail  with Error: java.lang.IllegalArgumentException: Error: 
> name expected at the position 10 of 'decimal(2,-2)' but '-' is found. 
> (state=,code=0)
> -
>
> Key: SPARK-34673
> URL: https://issues.apache.org/jira/browse/SPARK-34673
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.5
> Environment: Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Minor
> Attachments: Screenshot 2021-03-10 at 8.47.00 PM.png, Screenshot 
> 2021-03-19 at 1.33.54 PM.png
>
>
> Temporary views are created
> Select filter queries are executed on the Temporary views.
>  
> [Actual Issue] : - Select queries fail with Error: 
> java.lang.IllegalArgumentException: Error: name expected at the position 10 
> of 'decimal(2,-2)' but '-' is found. (state=,code=0)
>  
> [Expected Result] :- Select queries should be success on Temporary views.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-

[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306703#comment-17306703
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/23/21, 2:38 AM:
---

I can see the difference b/w plan which is creating in spark master branch and 
spark branch-2.4

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*:

== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(*2000.0*, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*:

 

== Parsed Logical Plan ==

CreateViewCommand `temp1_33`, select *20E2*, false, false, LocalTempView

   +- 'Project [unresolvedalias(*2.0E+3*, None)]

      +- OneRowRelation

 

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Optimized Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Physical Plan ==

Execute CreateViewCommand

   +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView

         +- 'Project [unresolvedalias(2.0E+3, None)]

            +- OneRowRelation

 

 


was (Author: ankitraj):
I can see the difference b/w plan which is creating in spark master branch and 
spark branch-2.4

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*:

== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(*2000.0*, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

 

 

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*:

 

== Parsed Logical Plan ==

CreateViewCommand `kajal_1`, select *20E2*, false, false, LocalTempView

   +- 'Project [unresolvedalias(*2.0E+3*, None)]

      +- OneRowRelation

 

== Analyzed Logical Plan ==

CreateViewCommand `kajal_1`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Optimized Logical Plan ==

CreateViewCommand `kajal_1`, select 20E2, false, false, LocalTempView

   +- 'Project [unresolvedalias(2.0E+3, None)]

      +- OneRowRelation

 

== Physical Plan ==

Execute CreateViewCommand

   +- CreateViewCommand `kajal_1`, select 20E2, false, false, LocalTempView

         +- 'Project [unresolvedalias(2.0E+3, None)]

            +- OneRowRelation

 

 

> Select queries fail  with Error: java.lang.IllegalArgumentException: Error: 
> name expected at the position 10 of 'decimal(2,-2)' but '-' is found. 
> (state=,code=0)
> -
>
> Key: SPARK-34673
> URL: https://issues.apache.org/jira/browse/SPARK-34673
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.5
> Environment: Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Minor
> Attachments: Screenshot 2021-03-10 at 8.47.00 PM.png, Screenshot 
> 2021-03-19 at 1.33.54 PM.png
>
>
> Temporary views are created
> Select filter queries are executed on the Temporary views.
>  
> [Actual Issue] : - Select queries fail with Error: 
> java.lang.IllegalArgumentException: Error: name expected at the position 10 
> of 'decimal(2,-2)' but '-' is found. (state=,code=0)
>  
> [Expected Result] :- Select queries should be success on Temporary views.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To 

[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306703#comment-17306703
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/23/21, 2:36 AM:
---

I can see a difference between the plans created on the Spark master branch and on branch-2.4.

Spark master branch plans for this query (create temporary view temp1_33 as select *20E2*;):

== Parsed Logical Plan ==
 'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(*2000.0*, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
 CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
 Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

Spark branch-2.4 plans for this query (create temporary view temp1_33 as select *20E2*;):

 

== Parsed Logical Plan ==
CreateViewCommand `kajal_1`, select *20E2*, false, false, LocalTempView
   +- 'Project [unresolvedalias(*2.0E+3*, None)]
      +- OneRowRelation

== Analyzed Logical Plan ==
CreateViewCommand `kajal_1`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2.0E+3, None)]
      +- OneRowRelation

== Optimized Logical Plan ==
CreateViewCommand `kajal_1`, select 20E2, false, false, LocalTempView
   +- 'Project [unresolvedalias(2.0E+3, None)]
      +- OneRowRelation

== Physical Plan ==
Execute CreateViewCommand
   +- CreateViewCommand `kajal_1`, select 20E2, false, false, LocalTempView
         +- 'Project [unresolvedalias(2.0E+3, None)]
            +- OneRowRelation


was (Author: ankitraj):
I can see a difference between the plans created on the Spark master branch and on branch-2.4.

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*;):

== Parsed Logical Plan ==
'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(*2000.0*, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select 
*20E2*;):







[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306703#comment-17306703
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/23/21, 2:35 AM:
---

I can see a difference between the plans created on the Spark master branch and on branch-2.4.

Spark-master branch plans for this query (create temporary view temp1_33 as 
select *20E2*;):

== Parsed Logical Plan ==
'CreateViewStatement [temp1_33], select *20E2*, 'Project 
[unresolvedalias(*2000.0*, None)], false, false, LocalTempView

== Analyzed Logical Plan ==

CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Optimized Logical Plan ==
CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

== Physical Plan ==
Execute CreateViewCommand
 +- CreateViewCommand `temp1_33`, select 20E2, false, false, LocalTempView
 +- 'Project [unresolvedalias(2000.0, None)]
 +- OneRowRelation

Spark-2.4 branch plans for this query (create temporary view temp1_33 as select *20E2*;):


was (Author: ankitraj):
I can see a difference between the plans created on the Spark master branch and on branch-2.4.







[jira] [Commented] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306703#comment-17306703
 ] 

Ankit Raj Boudh commented on SPARK-34673:
-

I can see a difference between the plans created on the Spark master branch and on branch-2.4.







[jira] [Commented] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-22 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17305913#comment-17305913
 ] 

Ankit Raj Boudh commented on SPARK-34673:
-

{code:java}
Caused by: java.lang.IllegalArgumentException: Error: name expected at the 
position 10 of 'decimal(2,-2)' but '-' is found. at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:354)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:331)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseParams(TypeInfoUtils.java:379)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parsePrimitiveParts(TypeInfoUtils.java:518)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.parsePrimitiveParts(TypeInfoUtils.java:533)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:136)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:109)
 at org.apache.hive.service.cli.TypeDescriptor.<init>(TypeDescriptor.java:57) 
at 
org.apache.hive.service.cli.ColumnDescriptor.<init>(ColumnDescriptor.java:53) 
at org.apache.hive.service.cli.TableSchema.<init>(TableSchema.java:52) at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$.getTableSchema(SparkExecuteStatementOperation.scala:300)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.resultSchema$lzycompute(SparkExecuteStatementOperation.scala:68)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.resultSchema(SparkExecuteStatementOperation.scala:63)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.getResultSetSchema(SparkExecuteStatementOperation.scala:155)
 at 
org.apache.hive.service.cli.operation.OperationManager.getOperationResultSetSchema(OperationManager.java:209)
 at 
org.apache.hive.service.cli.session.HiveSessionImpl.getResultSetMetadata(HiveSessionImpl.java:773)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
 ... 18 more{code}
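The trace fails in TypeInfoUtils$TypeInfoParser.expect: after the ',' the parser accepts only a name or an unsigned number, and character position 10 of 'decimal(2,-2)' holds '-'. A toy re-creation in Python (a sketch of the failure mode, not Hive's actual parser) reproduces the same message:

```python
# Sketch of a type-string parameter parser that, like Hive's TypeInfoParser,
# accepts only unsigned digits after '(' and ',' -- so a negative scale fails.
def parse_decimal_params(type_string: str) -> list:
    pos = type_string.index("(") + 1
    params = []
    while True:
        start = pos
        while pos < len(type_string) and type_string[pos].isdigit():
            pos += 1
        if pos == start:  # no digits found where a parameter was expected
            raise ValueError(
                f"Error: name expected at the position {pos} of "
                f"'{type_string}' but '{type_string[pos]}' is found."
            )
        params.append(int(type_string[start:pos]))
        if type_string[pos] == ")":
            return params
        pos += 1  # step over the ','

print(parse_decimal_params("decimal(2,2)"))   # [2, 2]
try:
    parse_decimal_params("decimal(2,-2)")
except ValueError as e:
    print(e)  # Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found.
```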







[jira] [Issue Comment Deleted] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,c

2021-03-19 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-34673:

Comment: was deleted

(was: By the way, this issue comes from the Hive serde:
{code:java}
Caused by: java.lang.IllegalArgumentException: Error: name expected at the 
position 10 of 'decimal(2,-2)' but '-' is found.
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:354)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:331)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseParams(TypeInfoUtils.java:379)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parsePrimitiveParts(TypeInfoUtils.java:518)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.parsePrimitiveParts(TypeInfoUtils.java:533)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:136)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:109)
 at org.apache.hive.service.cli.TypeDescriptor.<init>(TypeDescriptor.java:57)
 at 
org.apache.hive.service.cli.ColumnDescriptor.<init>(ColumnDescriptor.java:53)
 at org.apache.hive.service.cli.TableSchema.<init>(TableSchema.java:52)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$.getTableSchema(SparkExecuteStatementOperation.scala:300)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.resultSchema$lzycompute(SparkExecuteStatementOperation.scala:68)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.resultSchema(SparkExecuteStatementOperation.scala:63)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.getResultSetSchema(SparkExecuteStatementOperation.scala:155)
 at 
org.apache.hive.service.cli.operation.OperationManager.getOperationResultSetSchema(OperationManager.java:209)
 at 
org.apache.hive.service.cli.session.HiveSessionImpl.getResultSetMetadata(HiveSessionImpl.java:773)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
 ... 18 more{code})







[jira] [Commented] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304760#comment-17304760
 ] 

Ankit Raj Boudh commented on SPARK-34673:
-

By the way, this issue comes from the Hive serde:
{code:java}
Caused by: java.lang.IllegalArgumentException: Error: name expected at the 
position 10 of 'decimal(2,-2)' but '-' is found.
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:354)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:331)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseParams(TypeInfoUtils.java:379)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parsePrimitiveParts(TypeInfoUtils.java:518)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.parsePrimitiveParts(TypeInfoUtils.java:533)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:136)
 at 
org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:109)
 at org.apache.hive.service.cli.TypeDescriptor.<init>(TypeDescriptor.java:57)
 at 
org.apache.hive.service.cli.ColumnDescriptor.<init>(ColumnDescriptor.java:53)
 at org.apache.hive.service.cli.TableSchema.<init>(TableSchema.java:52)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$.getTableSchema(SparkExecuteStatementOperation.scala:300)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.resultSchema$lzycompute(SparkExecuteStatementOperation.scala:68)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.resultSchema(SparkExecuteStatementOperation.scala:63)
 at 
org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.getResultSetSchema(SparkExecuteStatementOperation.scala:155)
 at 
org.apache.hive.service.cli.operation.OperationManager.getOperationResultSetSchema(OperationManager.java:209)
 at 
org.apache.hive.service.cli.session.HiveSessionImpl.getResultSetMetadata(HiveSessionImpl.java:773)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
 ... 18 more{code}







[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304736#comment-17304736
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/19/21, 9:10 AM:
---

[~dongjoon] and [~hyukjin.kwon], I have checked this case in spark-sql and found that we are allowed to create a temporary view with a negative-scale *decimal(2, -2)* and can then query the data from the temporary view.

!Screenshot 2021-03-19 at 1.33.54 PM.png!

I think we should not allow creating a view with a negative-scale *decimal(2, -2)*.

I have checked the master branch behaviour and found that during creation of the temporary view the data_type shows as *double*.
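For contrast, the master branch behaviour described above amounts to reading the literal as a double up front, so no negative-scale decimal type string ever reaches Hive; a one-line Python analogue (illustrative only):

```python
# Master/3.x reports data_type double for 20E2: the literal becomes a
# floating-point 2000.0, and 'double' needs no parameters for Hive to parse.
value = float("20E2")
print(value, type(value).__name__)  # 2000.0 float
```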


was (Author: ankitraj):
[~dongjoon] and [~hyukjin.kwon], I have checked this case in spark-sql and found that we are allowed to create a temporary view with a negative-scale *decimal(2, -2)* and can then query the data from the temporary view.

!Screenshot 2021-03-19 at 1.33.54 PM.png!

I think it's wrong.

I have checked the master branch behaviour and found that during creation of the temporary view the data_type shows as *double*.







[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304736#comment-17304736
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/19/21, 9:05 AM:
---

[~dongjoon] and [~hyukjin.kwon], I have checked this case in spark-sql and found that we are allowed to create a temporary view with a negative-scale *decimal(2, -2)* and can then query the data from the temporary view.

!Screenshot 2021-03-19 at 1.33.54 PM.png!

I think it's wrong.

I have checked the master branch behaviour and found that during creation of the temporary view the data_type shows as *double*.


was (Author: ankitraj):
[~hyukjin.kwon], I have checked this case in spark-sql and found that we are allowed to create a temporary view with a negative-scale *decimal(2, -2)* and can then query the data from the temporary view.

!Screenshot 2021-03-19 at 1.33.54 PM.png!

I think it's wrong.

I have checked the master branch behaviour and found that during creation of the temporary view the data_type shows as *double*.







[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304736#comment-17304736
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/19/21, 9:03 AM:
---

[~hyukjin.kwon], I have checked this case in spark-sql and found that we are allowed to create a temporary view with a negative-scale *decimal(2, -2)* and can then query the data from the temporary view.

!Screenshot 2021-03-19 at 1.33.54 PM.png!

I think it's wrong.

I have checked the master branch behaviour and found that during creation of the temporary view the data_type shows as *double*.


was (Author: ankitraj):
[~hyukjin.kwon], I have checked this case in spark-sql and found that we are allowed to create a temporary view with a negative-scale *decimal(2, -2)* and can then query the data from the temporary view.

!Screenshot 2021-03-19 at 1.33.54 PM.png!

I think it's wrong.







[jira] [Commented] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304736#comment-17304736
 ] 

Ankit Raj Boudh commented on SPARK-34673:
-

[~hyukjin.kwon], I have checked this case in spark-sql and found that we are allowed to create a temporary view with a negative-scale *decimal(2, -2)* and can then query the data from the temporary view.

!Screenshot 2021-03-19 at 1.33.54 PM.png!

I think it's wrong.







[jira] [Updated] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-19 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-34673:

Attachment: Screenshot 2021-03-19 at 1.33.54 PM.png







[jira] [Commented] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-10 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17298896#comment-17298896
 ] 

Ankit Raj Boudh commented on SPARK-34673:
-

[~hyukjin.kwon], here is the testing scenario.

This is in Hive Beeline mode:

 
{code:java}
1.  create temporary view kajal1 as select * from values ("t2", 20E2) as t2(t2, 
t2i);
2.  select * from kajal1;
{code}
!Screenshot 2021-03-10 at 8.47.00 PM.png!







[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-10 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17298896#comment-17298896
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/10/21, 3:18 PM:
---

[~hyukjin.kwon], here is the testing scenario.

This is in Hive Beeline mode on Spark branch-2.4:

 
{code:java}
1.  create temporary view kajal1 as select * from values ("t2", 20E2) as t2(t2, 
t2i);
2.  select * from kajal1;
{code}
!Screenshot 2021-03-10 at 8.47.00 PM.png!


was (Author: ankitraj):
[~hyukjin.kwon], here is the testing scenario.

This is in Hive Beeline mode:

 
{code:java}
1.  create temporary view kajal1 as select * from values ("t2", 20E2) as t2(t2, 
t2i);
2.  select * from kajal1;
{code}
!Screenshot 2021-03-10 at 8.47.00 PM.png!








[jira] [Updated] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-10 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-34673:

Attachment: Screenshot 2021-03-10 at 8.47.00 PM.png







[jira] [Comment Edited] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-10 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17298819#comment-17298819
 ] 

Ankit Raj Boudh edited comment on SPARK-34673 at 3/10/21, 1:18 PM:
---

[~hyukjin.kwon], I will raise a PR for this soon.


was (Author: ankitraj):
[~hyukjin.kwon], i will raise pr for this.







[jira] [Commented] (SPARK-34673) Select queries fail with Error: java.lang.IllegalArgumentException: Error: name expected at the position 10 of 'decimal(2,-2)' but '-' is found. (state=,code=0)

2021-03-10 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17298819#comment-17298819
 ] 

Ankit Raj Boudh commented on SPARK-34673:
-

[~hyukjin.kwon], I will raise a PR for this.







[jira] [Commented] (SPARK-32306) `approx_percentile` in Spark SQL gives incorrect results

2020-07-15 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158863#comment-17158863
 ] 

Ankit Raj Boudh commented on SPARK-32306:
-

[~seanmalory], I will raise a PR for this soon.

> `approx_percentile` in Spark SQL gives incorrect results
> 
>
> Key: SPARK-32306
> URL: https://issues.apache.org/jira/browse/SPARK-32306
> Project: Spark
>  Issue Type: Bug
>  Components: PySpark, SQL
>Affects Versions: 2.4.4
>Reporter: Sean Malory
>Priority: Major
>
> The `approx_percentile` function in Spark SQL does not give the correct 
> result. I'm not sure how incorrect it is; it may just be a boundary issue. 
> From the docs:
> {quote}The accuracy parameter (default: 1) is a positive numeric literal 
> which controls approximation accuracy at the cost of memory. Higher value of 
> accuracy yields better accuracy, 1.0/accuracy is the relative error of the 
> approximation.
> {quote}
> This is not true. Here is a minimal example in `pyspark` where, essentially, 
> the median of 5 and 8 is calculated as 5:
> {code:python}
> import pyspark.sql.functions as psf
> df = spark.createDataFrame(
> [('bar', 5), ('bar', 8)], ['name', 'val']
> )
> median = psf.expr('percentile_approx(val, 0.5, 2147483647)')
> df.groupBy('name').agg(median.alias('median')).show()  # gives the median as 5
> {code}
> I've tested this with Spark v2.4.4 and pyspark v2.4.5, although I suspect 
> this is an issue with the underlying algorithm.
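For what it's worth, the reported result is consistent with percentile_approx returning an actual element of the input rather than interpolating between elements. A hypothetical rank-selection sketch (not Spark's Greenwald-Khanna implementation) reproduces the observation:

```python
import math

def percentile_lower_rank(values, p):
    # Return the smallest element whose cumulative rank covers
    # probability p. No interpolation takes place, so the "median"
    # of {5, 8} is 5 rather than 6.5.
    xs = sorted(values)
    rank = max(1, math.ceil(p * len(xs)))  # 1-based covering rank
    return xs[rank - 1]

print(percentile_lower_rank([5, 8], 0.5))  # -> 5
```

Under this reading the result is a boundary effect of element selection, not an accuracy bug, though whether that matches Spark's documented contract is the open question here.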






[jira] [Commented] (SPARK-32281) Spark wipes out SORTED spec in metastore when DESC is used

2020-07-15 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-32281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17158125#comment-17158125
 ] 

Ankit Raj Boudh commented on SPARK-32281:
-

[~bersprockets], I will raise a PR for this soon.

> Spark wipes out SORTED spec in metastore when DESC is used
> --
>
> Key: SPARK-32281
> URL: https://issues.apache.org/jira/browse/SPARK-32281
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: Bruce Robbins
>Priority: Major
>
> When altering a Hive bucketed table or updating its statistics, Spark will 
> wipe out the SORTED specification in the metastore if the specification uses 
> DESC.
>  For example:
> {noformat}
> 0: jdbc:hive2://localhost:1> -- in beeline
> 0: jdbc:hive2://localhost:1> create table bucketed (a int, b int, c int, 
> d int) clustered by (c) sorted by (c asc, d desc) into 10 buckets;
> No rows affected (0.045 seconds)
> 0: jdbc:hive2://localhost:1> show create table bucketed;
> ++
> |   createtab_stmt   |
> ++
> | CREATE TABLE `bucketed`(   |
> |   `a` int, |
> |   `b` int, |
> |   `c` int, |
> |   `d` int) |
> | CLUSTERED BY ( |
> |   c)   |
> | SORTED BY (|
> |   c ASC,   |
> |   d DESC)  |
> | INTO 10 BUCKETS|
> | ROW FORMAT SERDE   |
> |   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'  |
> | STORED AS INPUTFORMAT  |
> |   'org.apache.hadoop.mapred.TextInputFormat'   |
> | OUTPUTFORMAT   |
> |   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION   |
> |   'file:/Users/bruce/hadoop/apache-hive-2.3.7-bin/warehouse/bucketed' |
> | TBLPROPERTIES (|
> |   'transient_lastDdlTime'='1594488043')|
> ++
> 21 rows selected (0.042 seconds)
> 0: jdbc:hive2://localhost:1> 
> -
> -
> -
> scala> // in spark
> scala> sql("alter table bucketed set tblproperties ('foo'='bar')")
> 20/07/11 10:21:36 WARN HiveConf: HiveConf of name hive.metastore.local does 
> not exist
> 20/07/11 10:21:38 WARN SessionState: METASTORE_FILTER_HOOK will be ignored, 
> since hive.security.authorization.manager is set to instance of 
> HiveAuthorizerFactory.
> res0: org.apache.spark.sql.DataFrame = []
> scala> 
> -
> -
> -
> 0: jdbc:hive2://localhost:1> -- back in beeline
> 0: jdbc:hive2://localhost:1> show create table bucketed;
> ++
> |   createtab_stmt   |
> ++
> | CREATE TABLE `bucketed`(   |
> |   `a` int, |
> |   `b` int, |
> |   `c` int, |
> |   `d` int) |
> | CLUSTERED BY ( |
> |   c)   |
> | INTO 10 BUCKETS|
> | ROW FORMAT SERDE   |
> |   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'  |
> | STORED AS INPUTFORMAT  |
> |   'org.apache.hadoop.mapred.TextInputFormat'   |
> | OUTPUTFORMAT   |
> |   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION   |
> |   'file:/Users/bruce/hadoop/apache-hive-2.3.7-bin/warehouse/bucketed' |
> | TBLPROPERTIES (|
> |   'foo'='bar', |
> |   'spark.sql.partitionProvider'='catalog', |
> |   'transient_lastDdlTime'='1594488098')|
> ++
> 20 rows selected (0.038 seconds)
> 0: jdbc:hive2://localhost:1> 
> {noformat}
> Note that the SORTED specification disappears.
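One hedged way to picture the bug (hypothetical field names, not Spark's HiveClientImpl code): if the alter-table path writes back only the metadata fields it models, then any field it does not model, such as the sort columns, is silently dropped on the round trip:

```python
def write_back(table_spec):
    # Hypothetical metadata round-trip: only fields the writer knows
    # about survive; 'sort_cols' is not among them, so SORTED BY is lost.
    known = {"name", "columns", "bucket_cols", "num_buckets", "properties"}
    return {k: v for k, v in table_spec.items() if k in known}

table = {
    "name": "bucketed",
    "columns": ["a", "b", "c", "d"],
    "bucket_cols": ["c"],
    "num_buckets": 10,
    "sort_cols": [("c", "ASC"), ("d", "DESC")],
    "properties": {"foo": "bar"},
}
print("sort_cols" in write_back(table))  # -> False
```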
> Another example, this time using insert:
> {noformat}
> 0: jdbc:hive2://localhost:1> -- in beeline
> 0: jdbc:hive2://localhost:1> create table bucketed (a int, b int, c int, 

[jira] [Issue Comment Deleted] (SPARK-31622) Test-jar in the Spark distribution

2020-05-10 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-31622:

Comment: was deleted

(was: [~hyukjin.kwon] , ok I will check and will update you.)

> Test-jar in the Spark distribution
> --
>
> Key: SPARK-31622
> URL: https://issues.apache.org/jira/browse/SPARK-31622
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Arseniy Tashoyan
>Priority: Minor
>
> The jar with classifier *tests* is delivered in the Spark distribution:
> {code:java}
> ls -1 spark-3.0.0-preview2-bin-hadoop2.7/jars/ | grep tests
> spark-tags_2.12-3.0.0-preview2-tests.jar
> {code}
> Normally, test-jars should not be used for production.






[jira] [Commented] (SPARK-31622) Test-jar in the Spark distribution

2020-05-10 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104069#comment-17104069
 ] 

Ankit Raj Boudh commented on SPARK-31622:
-

[~hyukjin.kwon], OK, I will check and update you.







[jira] [Commented] (SPARK-31622) Test-jar in the Spark distribution

2020-05-10 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17104068#comment-17104068
 ] 

Ankit Raj Boudh commented on SPARK-31622:
-

[~hyukjin.kwon], OK, I will check and update you.







[jira] [Commented] (SPARK-31622) Test-jar in the Spark distribution

2020-05-08 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102466#comment-17102466
 ] 

Ankit Raj Boudh commented on SPARK-31622:
-

[~hyukjin.kwon], please confirm this, then I will raise a PR for it.







[jira] [Commented] (SPARK-31654) sequence producing inconsistent intervals for month step

2020-05-08 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102421#comment-17102421
 ] 

Ankit Raj Boudh commented on SPARK-31654:
-

[~roman_y], I will raise a PR for this.

> sequence producing inconsistent intervals for month step
> 
>
> Key: SPARK-31654
> URL: https://issues.apache.org/jira/browse/SPARK-31654
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.4
>Reporter: Roman Yalki
>Priority: Major
>
> Taking an example from [https://spark.apache.org/docs/latest/api/sql/]
> {code:java}
> > SELECT sequence(to_date('2018-01-01'), to_date('2018-03-01'), interval 1 
> > month);{code}
> [2018-01-01,2018-02-01,2018-03-01]
> If `stop` is extended to the end of the year, some intervals are returned as 
> the last day of the month, whereas the first day of the month is expected:
> {code:java}
> > SELECT sequence(to_date('2018-01-01'), to_date('2019-01-01'), interval 1 
> > month){code}
> [2018-01-01, 2018-02-01, 2018-03-01, *2018-03-31, 2018-04-30, 2018-05-31, 
> 2018-06-30, 2018-07-31, 2018-08-31, 2018-09-30, 2018-10-31*, 2018-12-01, 
> 2019-01-01]
>  
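Calendar-month stepping is exactly where such sequences tend to go wrong: adding months requires clamping the day to shorter months, so computing element i as start + i months and computing it as previous + 1 month can disagree. A sketch of clamped month addition (an illustration of the pitfall, not Spark's sequence implementation):

```python
import calendar
from datetime import date

def add_months(d: date, n: int) -> date:
    # Add n calendar months, clamping the day-of-month to the target
    # month's length; this clamping is what makes iterated "+1 month"
    # drift away from a direct "+n months".
    y, m0 = divmod(d.month - 1 + n, 12)
    y, m = d.year + y, m0 + 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))

start = date(2018, 1, 31)
print(add_months(start, 2))                 # 2018-03-31 (direct)
print(add_months(add_months(start, 1), 1))  # 2018-03-28 (iterated, drifted)
```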






[jira] [Commented] (SPARK-31657) CSV Writer writes no header for empty DataFrames

2020-05-08 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102420#comment-17102420
 ] 

Ankit Raj Boudh commented on SPARK-31657:
-

[~fpin], I will raise a PR for this.

> CSV Writer writes no header for empty DataFrames
> 
>
> Key: SPARK-31657
> URL: https://issues.apache.org/jira/browse/SPARK-31657
> Project: Spark
>  Issue Type: Bug
>  Components: Input/Output
>Affects Versions: 2.4.1
> Environment: Local pyspark 2.41
>Reporter: Furcy Pin
>Priority: Minor
>
> When writing a DataFrame as CSV with the Header option set to true,
> the header is not written when the DataFrame is empty.
> This causes failures for processes that read the CSV back.
> Example (please notice the limit(0) in the second example):
> {code:java}
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /__ / .__/\_,_/_/ /_/\_\   version 2.4.1
>       /_/
> Using Python version 2.7.17 (default, Nov 7 2019 10:07:09)
> SparkSession available as 'spark'.
> >>> df1 = spark.sql("SELECT 1 as a")
> >>> df1.limit(1).write.mode("OVERWRITE").option("Header", 
> >>> True).csv("data/test/csv")
> >>> spark.read.option("Header", True).csv("data/test/csv").show()
> +---+
> | a|
> +---+
> | 1|
> +---+
> >>> 
> >>> df1.limit(0).write.mode("OVERWRITE").option("Header", 
> >>> True).csv("data/test/csv")
> >>> spark.read.option("Header", True).csv("data/test/csv").show()
> ++
> ||
> ++
> ++
> {code}
>  
> Expected behavior:
> {code:java}
> >>> df1.limit(0).write.mode("OVERWRITE").option("Header", 
> >>> True).csv("data/test/csv")
> >>> spark.read.option("Header", True).csv("data/test/csv").show()
> +---+
> | a|
> +---+
> +---+{code}
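The expected behaviour matches what most CSV writers do. For example, Python's csv module emits the header line even when no data rows follow, so a reader can still recover the schema (shown here only as a behavioural reference, not as Spark's writer):

```python
import csv
import io

def to_csv(rows, fieldnames, header=True):
    # Write rows as CSV; the header line is emitted even for zero
    # rows, which is the behaviour the report expects from Spark.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    if header:
        writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(repr(to_csv([], ["a"])))  # -> 'a\r\n'
```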






[jira] [Commented] (SPARK-31595) Spark sql cli should allow unescaped quote mark in quoted string

2020-04-28 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094830#comment-17094830
 ] 

Ankit Raj Boudh commented on SPARK-31595:
-

[~adrian-wang], can I start working on this issue?

> Spark sql cli should allow unescaped quote mark in quoted string
> 
>
> Key: SPARK-31595
> URL: https://issues.apache.org/jira/browse/SPARK-31595
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Adrian Wang
>Priority: Major
>
> spark-sql> select "'";
> spark-sql> select '"';
> In the Spark parser, if we pass the text `select "'";`, a 
> ParserCancellationException occurs, which is then handled by 
> PredictionMode.LL. By dropping the trailing `;` correctly we can avoid that 
> retry.
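A quote-aware splitter is the usual fix for this class of CLI bug: a ';' inside a quoted string must not terminate the statement. A minimal sketch of the idea (an illustration, not the spark-sql CLI's actual tokenizer):

```python
def split_statements(text):
    # Split on ';' only outside quoted strings, so `select "'";`
    # stays a single statement even though it contains a quote mark.
    stmts, buf, quote = [], [], None
    for ch in text:
        if quote:
            buf.append(ch)
            if ch == quote:  # closing quote of the current string
                quote = None
        elif ch in ("'", '"'):
            quote = ch
            buf.append(ch)
        elif ch == ";":
            stmts.append("".join(buf).strip())
            buf = []
        else:
            buf.append(ch)
    if "".join(buf).strip():
        stmts.append("".join(buf).strip())
    return stmts

print(split_statements('select "\'";'))  # -> ['select "\'"']
```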






[jira] [Commented] (SPARK-31591) namePrefix could be null in Utils.createDirectory

2020-04-28 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094829#comment-17094829
 ] 

Ankit Raj Boudh commented on SPARK-31591:
-

[~cltlfcjin], it's OK. Thank you for raising the PR :)

> namePrefix could be null in Utils.createDirectory
> -
>
> Key: SPARK-31591
> URL: https://issues.apache.org/jira/browse/SPARK-31591
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 3.0.0
>Reporter: Lantao Jin
>Priority: Minor
>
> In our production, we find that many shuffle files could be located in
> /hadoop/2/yarn/local/usercache/b_carmel/appcache/application_1586487864336_4602/*null*-107d4e9c-d3c7-419e-9743-a21dc4eaeb3f/3a
> The Utils.createDirectory() method uses a default parameter value of "spark":
> {code}
>   def createDirectory(root: String, namePrefix: String = "spark"): File = {
> {code}
> But in some cases, the actual namePrefix is null. If the method is called 
> with null, then the default value would not be applied.
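The pitfall is the same one Python exhibits: a default parameter value applies only when the argument is omitted, not when an explicit null is passed. A sketch mirroring the Scala signature (hypothetical path suffix for illustration):

```python
def create_directory(root, name_prefix="spark"):
    # The default only kicks in when the caller omits the argument;
    # an explicit None flows straight into the directory name, which
    # is how "null-..." directories appear on disk.
    return f"{root}/{name_prefix}-107d4e9c"

print(create_directory("/tmp"))        # -> /tmp/spark-107d4e9c
print(create_directory("/tmp", None))  # -> /tmp/None-107d4e9c
```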






[jira] [Commented] (SPARK-31586) Replace expression TimeSub(l, r) with TimeAdd(l -r)

2020-04-28 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094217#comment-17094217
 ] 

Ankit Raj Boudh commented on SPARK-31586:
-

Hi Kent Yao, are you working on this issue? If not, can I start working on it?

> Replace expression TimeSub(l, r) with TimeAdd(l -r)
> ---
>
> Key: SPARK-31586
> URL: https://issues.apache.org/jira/browse/SPARK-31586
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.1.0
>Reporter: Kent Yao
>Priority: Minor
>
> The implementation of TimeSub for subtracting an interval from a timestamp 
> largely duplicates TimeAdd. We can replace it with TimeAdd(l, 
> -r) since they are equivalent. 
> Suggestion from 
> https://github.com/apache/spark/pull/28310#discussion_r414259239






[jira] [Commented] (SPARK-31591) namePrefix could be null in Utils.createDirectory

2020-04-28 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094210#comment-17094210
 ] 

Ankit Raj Boudh commented on SPARK-31591:
-

Hi [~cltlfcjin], I will start working on this issue.







[jira] [Commented] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-21 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17041727#comment-17041727
 ] 

Ankit Raj Boudh commented on SPARK-30868:
-

[~Jackey Lee], your point is correct. Thank you for raising the PR :)

> Throw Exception if runHive(sql) failed
> --
>
> Key: SPARK-30868
> URL: https://issues.apache.org/jira/browse/SPARK-30868
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Jackey Lee
>Priority: Major
>
> At present, HiveClientImpl.runHive does not throw an exception when a 
> statement fails, which means it cannot report error information properly.
> Example:
> {code:scala}
> spark.sql("add jar file:///tmp/test.jar")
> spark.sql("show databases").show()
> {code}
> /tmp/test.jar doesn't exist, so the add jar statement fails. However, this 
> code runs to completion without causing an application failure.
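The requested behaviour could be sketched as checking the driver's response code and raising instead of merely logging (a hypothetical outline in Python, not HiveClientImpl's actual API; `execute` stands in for the Hive driver call and is assumed to return 0 on success):

```python
def run_hive(statement, execute):
    # execute(statement) stands in for the Hive driver call; a
    # non-zero response code now surfaces as an exception instead
    # of being swallowed.
    code = execute(statement)
    if code != 0:
        raise RuntimeError(f"statement failed (code {code}): {statement}")
    return code

# A failing 'add jar' now raises rather than passing silently:
try:
    run_hive("add jar file:///tmp/test.jar", lambda s: 1)
except RuntimeError as e:
    print(e)  # statement failed (code 1): add jar file:///tmp/test.jar
```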






[jira] [Comment Edited] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040834#comment-17040834
 ] 

Ankit Raj Boudh edited comment on SPARK-30868 at 2/20/20 10:45 AM:
---

In my opinion, the user should handle the exception case; during exception 
handling they can terminate the application, or log the error and continue 
execution.

In any case, you have already raised a PR; once it is reviewed, this will 
become clear to us.


was (Author: ankitraj):
According to me user should handle Exception case,during exception handling 
they can terminate application or can write log and continue further execution. 







[jira] [Commented] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040834#comment-17040834
 ] 

Ankit Raj Boudh commented on SPARK-30868:
-

In my opinion, the user should handle the exception case; during exception 
handling they can terminate the application, or log the error and continue 
execution.







[jira] [Comment Edited] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040803#comment-17040803
 ] 

Ankit Raj Boudh edited comment on SPARK-30868 at 2/20/20 9:44 AM:
--

[~srowen] and [~cloud_fan], please give your suggestions on this JIRA; in my 
opinion, the current behaviour is correct.


was (Author: ankitraj):
[~srowen] and [~cloud_fan] please give your suggestion, according to me this 
behaviour is correct.







[jira] [Comment Edited] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040803#comment-17040803
 ] 

Ankit Raj Boudh edited comment on SPARK-30868 at 2/20/20 9:43 AM:
--

[~srowen] and [~cloud_fan], please give your suggestions; in my opinion, this 
behaviour is correct.


was (Author: ankitraj):
[~srowen] and [~cloud_fan] please give your suggestion, I think this behaviour 
is ok.







[jira] [Commented] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040803#comment-17040803
 ] 

Ankit Raj Boudh commented on SPARK-30868:
-

[~srowen] and [~cloud_fan], please give your suggestions; I think this 
behaviour is OK.







[jira] [Commented] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17040795#comment-17040795
 ] 

Ankit Raj Boudh commented on SPARK-30868:
-

No, I have not created any JIRA.

> Throw Exception if runHive(sql) failed
> --
>
> Key: SPARK-30868
> URL: https://issues.apache.org/jira/browse/SPARK-30868
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Jackey Lee
>Priority: Major
>
> At present, HiveClientImpl.runHive does not throw an exception when it runs 
> incorrectly, which causes it to fail to report error information 
> normally.
> Example
> {code:scala}
> spark.sql("add jar file:///tmp/test.jar")
> spark.sql("show databases").show()
> {code}
> /tmp/test.jar doesn't exist, so the add jar command fails. However, this code 
> runs to completion without causing an application failure.






[jira] [Commented] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-18 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039749#comment-17039749
 ] 

Ankit Raj Boudh commented on SPARK-30868:
-

[~Jackey Lee], do you mean that after `add jar` fails, the .show() statement 
should also throw an exception?

> Throw Exception if runHive(sql) failed
> --
>
> Key: SPARK-30868
> URL: https://issues.apache.org/jira/browse/SPARK-30868
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Jackey Lee
>Priority: Major
>
> At present, HiveClientImpl.runHive does not throw an exception when it runs 
> incorrectly, which causes it to fail to report error information 
> normally.
> Example
> {code:scala}
> spark.sql("add jar file:///tmp/test.jar").show()
> spark.sql("show databases").show()
> {code}
> /tmp/test.jar doesn't exist, so the add jar command fails. However, this code 
> runs to completion without causing an application failure.






[jira] [Commented] (SPARK-30868) Throw Exception if runHive(sql) failed

2020-02-18 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17039694#comment-17039694
 ] 

Ankit Raj Boudh commented on SPARK-30868:
-

I am working on this.

> Throw Exception if runHive(sql) failed
> --
>
> Key: SPARK-30868
> URL: https://issues.apache.org/jira/browse/SPARK-30868
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Jackey Lee
>Priority: Major
>
> At present, HiveClientImpl.runHive does not throw an exception when it runs 
> incorrectly, which causes it to fail to report error information 
> normally.
> Example
> {code:scala}
> spark.sql("add jar file:///tmp/test.jar").show()
> spark.sql("show databases").show()
> {code}
> /tmp/test.jar doesn't exist, so the add jar command fails. However, this code 
> runs to completion without causing an application failure.






[jira] [Commented] (SPARK-30461) details link when expanded should change the status from collapse to expand state

2020-01-08 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010744#comment-17010744
 ] 

Ankit Raj Boudh commented on SPARK-30461:
-

I will raise the PR for this.

> details link when expanded should change the status from collapse to expand  
> state
> --
>
> Key: SPARK-30461
> URL: https://issues.apache.org/jira/browse/SPARK-30461
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> When the user clicks on +details, its status should change to -details.
> This has to be handled in all tabs, such as SQL, Stage, and Thread Dump.
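The requested toggle can be sketched as a small pure function (a hypothetical helper, not the actual Spark Web UI code) that flips the expand/collapse marker on a details link label:

```python
def toggle_details_label(label: str) -> str:
    """Flip the leading expand/collapse marker of a details link label."""
    if label.startswith("+"):
        return "-" + label[1:]  # expanded: show the collapse marker
    if label.startswith("-"):
        return "+" + label[1:]  # collapsed: show the expand marker
    return label  # no marker present; leave unchanged
```

The same function would back the click handler in each affected tab (SQL, Stage, Thread Dump).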






[jira] [Commented] (SPARK-30455) Select All should unselect after un-selecting any selected item from list.

2020-01-07 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010424#comment-17010424
 ] 

Ankit Raj Boudh commented on SPARK-30455:
-

I will raise a PR for this.

> Select All should unselect after un-selecting any selected item from list.
> --
>
> Key: SPARK-30455
> URL: https://issues.apache.org/jira/browse/SPARK-30455
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.4.4
>Reporter: Ankit Raj Boudh
>Priority: Minor
>







[jira] [Comment Edited] (SPARK-30455) Select All should unselect after un-selecting any selected item from list.

2020-01-07 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010424#comment-17010424
 ] 

Ankit Raj Boudh edited comment on SPARK-30455 at 1/8/20 7:06 AM:
-

I will raise a PR for this.


was (Author: ankitraj):
i will raise for this.

> Select All should unselect after un-selecting any selected item from list.
> --
>
> Key: SPARK-30455
> URL: https://issues.apache.org/jira/browse/SPARK-30455
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 2.4.4
>Reporter: Ankit Raj Boudh
>Priority: Minor
>







[jira] [Created] (SPARK-30455) Select All should unselect after un-selecting any selected item from list.

2020-01-07 Thread Ankit Raj Boudh (Jira)
Ankit Raj Boudh created SPARK-30455:
---

 Summary: Select All should unselect after un-selecting any 
selected item from list.
 Key: SPARK-30455
 URL: https://issues.apache.org/jira/browse/SPARK-30455
 Project: Spark
  Issue Type: Improvement
  Components: Web UI
Affects Versions: 2.4.4
Reporter: Ankit Raj Boudh









[jira] [Commented] (SPARK-30168) Eliminate warnings in Parquet datasource

2020-01-04 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17008023#comment-17008023
 ] 

Ankit Raj Boudh commented on SPARK-30168:
-

[~maxgekk], I am also facing an issue resolving this one; if you want to work 
on it, please continue. The remaining warning is:
Warning:Warning:line (97)java: getRowGroupOffsets() in 
org.apache.parquet.hadoop.ParquetInputSplit has been deprecated

> Eliminate warnings in Parquet datasource
> 
>
> Key: SPARK-30168
> URL: https://issues.apache.org/jira/browse/SPARK-30168
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Maxim Gekk
>Priority: Minor
>
> # 
> sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala
> {code}
> Warning:Warning:line (120)class ParquetInputSplit in package hadoop is 
> deprecated: see corresponding Javadoc for more information.
>   Option[TimeZone]) => RecordReader[Void, T]): RecordReader[Void, T] 
> = {
> Warning:Warning:line (125)class ParquetInputSplit in package hadoop is 
> deprecated: see corresponding Javadoc for more information.
>   new org.apache.parquet.hadoop.ParquetInputSplit(
> Warning:Warning:line (134)method readFooter in class ParquetFileReader is 
> deprecated: see corresponding Javadoc for more information.
>   ParquetFileReader.readFooter(conf, filePath, 
> SKIP_ROW_GROUPS).getFileMetaData
> Warning:Warning:line (183)class ParquetInputSplit in package hadoop is 
> deprecated: see corresponding Javadoc for more information.
>   split: ParquetInputSplit,
> Warning:Warning:line (212)class ParquetInputSplit in package hadoop is 
> deprecated: see corresponding Javadoc for more information.
>   split: ParquetInputSplit,
> {code}
> # 
> sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
> {code}
> Warning:Warning:line (55)java: org.apache.parquet.hadoop.ParquetInputSplit in 
> org.apache.parquet.hadoop has been deprecated
> Warning:Warning:line (95)java: 
> org.apache.parquet.hadoop.ParquetInputSplit in org.apache.parquet.hadoop has 
> been deprecated
> Warning:Warning:line (95)java: 
> org.apache.parquet.hadoop.ParquetInputSplit in org.apache.parquet.hadoop has 
> been deprecated
> Warning:Warning:line (97)java: getRowGroupOffsets() in 
> org.apache.parquet.hadoop.ParquetInputSplit has been deprecated
> Warning:Warning:line (105)java: 
> readFooter(org.apache.hadoop.conf.Configuration,org.apache.hadoop.fs.Path,org.apache.parquet.format.converter.ParquetMetadataConverter.MetadataFilter)
>  in org.apache.parquet.hadoop.ParquetFileReader has been deprecated
> Warning:Warning:line (108)java: 
> filterRowGroups(org.apache.parquet.filter2.compat.FilterCompat.Filter,java.util.List,org.apache.parquet.schema.MessageType)
>  in org.apache.parquet.filter2.compat.RowGroupFilter has been deprecated
> Warning:Warning:line (111)java: 
> readFooter(org.apache.hadoop.conf.Configuration,org.apache.hadoop.fs.Path,org.apache.parquet.format.converter.ParquetMetadataConverter.MetadataFilter)
>  in org.apache.parquet.hadoop.ParquetFileReader has been deprecated
> Warning:Warning:line (147)java: 
> ParquetFileReader(org.apache.hadoop.conf.Configuration,org.apache.parquet.hadoop.metadata.FileMetaData,org.apache.hadoop.fs.Path,java.util.List,java.util.List)
>  in org.apache.parquet.hadoop.ParquetFileReader has been deprecated
> Warning:Warning:line (203)java: 
> readFooter(org.apache.hadoop.conf.Configuration,org.apache.hadoop.fs.Path,org.apache.parquet.format.converter.ParquetMetadataConverter.MetadataFilter)
>  in org.apache.parquet.hadoop.ParquetFileReader has been deprecated
> Warning:Warning:line (226)java: 
> ParquetFileReader(org.apache.hadoop.conf.Configuration,org.apache.parquet.hadoop.metadata.FileMetaData,org.apache.hadoop.fs.Path,java.util.List,java.util.List)
>  in org.apache.parquet.hadoop.ParquetFileReader has been deprecated
> {code}
> # 
> sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetCompatibilityTest.scala
> # 
> sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetInteroperabilitySuite.scala
> # 
> sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTest.scala
> # 
> sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala






[jira] [Commented] (SPARK-30172) Eliminate warnings: part3

2020-01-04 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17008019#comment-17008019
 ] 

Ankit Raj Boudh commented on SPARK-30172:
-

[~maxgekk], if you want to, please continue.

> Eliminate warnings: part3
> -
>
> Key: SPARK-30172
> URL: https://issues.apache.org/jira/browse/SPARK-30172
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> /sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala
> Warning:Warning:line (422)method initialize in class AbstractSerDe is 
> deprecated: see corresponding Javadoc for more information.
> serde.initialize(null, properties)
> /sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
> Warning:Warning:line (216)method initialize in class GenericUDTF is 
> deprecated: see corresponding Javadoc for more information.
>   protected lazy val outputInspector = 
> function.initialize(inputInspectors.toArray)
> Warning:Warning:line (342)class UDAF in package exec is deprecated: see 
> corresponding Javadoc for more information.
>   new GenericUDAFBridge(funcWrapper.createFunction[UDAF]())
> Warning:Warning:line (503)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
> def serialize(buffer: AggregationBuffer): Array[Byte] = {
> Warning:Warning:line (523)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
> def deserialize(bytes: Array[Byte]): AggregationBuffer = {
> Warning:Warning:line (538)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
> case class HiveUDAFBuffer(buf: AggregationBuffer, canDoMerge: Boolean)
> Warning:Warning:line (538)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
> case class HiveUDAFBuffer(buf: AggregationBuffer, canDoMerge: Boolean)
> /sql/hive/src/main/java/org/apache/hadoop/hive/ql/io/orc/SparkOrcNewRecordReader.java
> Warning:Warning:line (44)java: getTypes() in org.apache.orc.Reader has 
> been deprecated
> Warning:Warning:line (47)java: getTypes() in org.apache.orc.Reader has 
> been deprecated
> /sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
> Warning:Warning:line (2,368)method readFooter in class ParquetFileReader 
> is deprecated: see corresponding Javadoc for more information.
> val footer = ParquetFileReader.readFooter(
> /sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDAFSuite.scala
> Warning:Warning:line (202)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def getNewAggregationBuffer: AggregationBuffer = new 
> MockUDAFBuffer(0L, 0L)
> Warning:Warning:line (204)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def reset(agg: AggregationBuffer): Unit = {
> Warning:Warning:line (212)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def iterate(agg: AggregationBuffer, parameters: Array[AnyRef]): 
> Unit = {
> Warning:Warning:line (221)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def merge(agg: AggregationBuffer, partial: Object): Unit = {
> Warning:Warning:line (231)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def terminatePartial(agg: AggregationBuffer): AnyRef = {
> Warning:Warning:line (236)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def terminate(agg: AggregationBuffer): AnyRef = 
> terminatePartial(agg)
> Warning:Warning:line (257)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def getNewAggregationBuffer: AggregationBuffer = {
> Warning:Warning:line (266)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def reset(agg: AggregationBuffer): Unit = {
> Warning:Warning:line (277)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def iterate(agg: AggregationBuffer, parameters: 

[jira] [Issue Comment Deleted] (SPARK-30177) Eliminate warnings: part7

2020-01-04 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-30177:

Comment: was deleted

(was: I will raise PR for this)

> Eliminate warnings: part7
> -
>
> Key: SPARK-30177
> URL: https://issues.apache.org/jira/browse/SPARK-30177
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> /mllib/src/test/scala/org/apache/spark/ml/clustering/BisectingKMeansSuite.scala
> Warning:Warning:line (108)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
> assert(model.computeCost(dataset) < 0.1)
> Warning:Warning:line (135)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
> assert(model.computeCost(dataset) == summary.trainingCost)
> Warning:Warning:line (195)method computeCost in class 
> BisectingKMeansModel is deprecated (since 3.0.0): This method is deprecated 
> and will be removed in future versions. Use ClusteringEvaluator instead. You 
> can also get the cost on the training dataset in the summary.
>   model.computeCost(dataset)
> 
> /sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
> Warning:Warning:line (105)Java enum ALLOW_UNQUOTED_CONTROL_CHARS in Java 
> enum Feature is deprecated: see corresponding Javadoc for more information.
>   jsonFactory.enable(JsonParser.Feature.ALLOW_UNQUOTED_CONTROL_CHARS)
> /sql/core/src/test/java/test/org/apache/spark/sql/Java8DatasetAggregatorSuite.java
> Warning:Warning:line (28)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (37)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (46)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (55)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> Warning:Warning:line (64)java: 
> org.apache.spark.sql.expressions.javalang.typed in 
> org.apache.spark.sql.expressions.javalang has been deprecated
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
> Information:Information:java: 
> /Users/maxim/proj/eliminate-warning/sql/core/src/test/java/test/org/apache/spark/sql/JavaTestUtils.java
>  uses unchecked or unsafe operations.
> Information:Information:java: Recompile with -Xlint:unchecked for details.
> /sql/core/src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java
> Warning:Warning:line (478)java: 
> json(org.apache.spark.api.java.JavaRDD) in 
> org.apache.spark.sql.DataFrameReader has been deprecated






[jira] [Commented] (SPARK-30172) Eliminate warnings: part3

2020-01-04 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17008014#comment-17008014
 ] 

Ankit Raj Boudh commented on SPARK-30172:
-

[~maxgekk], sorry for the late response. Yes, I am still working on it, but I 
am facing an issue fixing this:

function.initialize(inputInspectors.toArray)
Warning:Warning:line (342)class UDAF in package exec is deprecated: see 
corresponding Javadoc for more information.

> Eliminate warnings: part3
> -
>
> Key: SPARK-30172
> URL: https://issues.apache.org/jira/browse/SPARK-30172
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> /sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformationExec.scala
> Warning:Warning:line (422)method initialize in class AbstractSerDe is 
> deprecated: see corresponding Javadoc for more information.
> serde.initialize(null, properties)
> /sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
> Warning:Warning:line (216)method initialize in class GenericUDTF is 
> deprecated: see corresponding Javadoc for more information.
>   protected lazy val outputInspector = 
> function.initialize(inputInspectors.toArray)
> Warning:Warning:line (342)class UDAF in package exec is deprecated: see 
> corresponding Javadoc for more information.
>   new GenericUDAFBridge(funcWrapper.createFunction[UDAF]())
> Warning:Warning:line (503)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
> def serialize(buffer: AggregationBuffer): Array[Byte] = {
> Warning:Warning:line (523)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
> def deserialize(bytes: Array[Byte]): AggregationBuffer = {
> Warning:Warning:line (538)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
> case class HiveUDAFBuffer(buf: AggregationBuffer, canDoMerge: Boolean)
> Warning:Warning:line (538)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
> case class HiveUDAFBuffer(buf: AggregationBuffer, canDoMerge: Boolean)
> /sql/hive/src/main/java/org/apache/hadoop/hive/ql/io/orc/SparkOrcNewRecordReader.java
> Warning:Warning:line (44)java: getTypes() in org.apache.orc.Reader has 
> been deprecated
> Warning:Warning:line (47)java: getTypes() in org.apache.orc.Reader has 
> been deprecated
> /sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
> Warning:Warning:line (2,368)method readFooter in class ParquetFileReader 
> is deprecated: see corresponding Javadoc for more information.
> val footer = ParquetFileReader.readFooter(
> /sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDAFSuite.scala
> Warning:Warning:line (202)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def getNewAggregationBuffer: AggregationBuffer = new 
> MockUDAFBuffer(0L, 0L)
> Warning:Warning:line (204)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def reset(agg: AggregationBuffer): Unit = {
> Warning:Warning:line (212)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def iterate(agg: AggregationBuffer, parameters: Array[AnyRef]): 
> Unit = {
> Warning:Warning:line (221)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def merge(agg: AggregationBuffer, partial: Object): Unit = {
> Warning:Warning:line (231)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def terminatePartial(agg: AggregationBuffer): AnyRef = {
> Warning:Warning:line (236)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def terminate(agg: AggregationBuffer): AnyRef = 
> terminatePartial(agg)
> Warning:Warning:line (257)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def getNewAggregationBuffer: AggregationBuffer = {
> Warning:Warning:line (266)trait AggregationBuffer in class 
> GenericUDAFEvaluator is deprecated: see corresponding Javadoc for more 
> information.
>   override def reset(agg: AggregationBuffer): Unit = {
> 

[jira] [Issue Comment Deleted] (SPARK-29760) Document VALUES statement in SQL Reference.

2020-01-02 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-29760:

Comment: was deleted

(was: @Sean R. Owen , i will raise PR for this.)

> Document VALUES statement in SQL Reference.
> ---
>
> Key: SPARK-29760
> URL: https://issues.apache.org/jira/browse/SPARK-29760
> Project: Spark
>  Issue Type: Sub-task
>  Components: Documentation, SQL
>Affects Versions: 2.4.4
>Reporter: jobit mathew
>Priority: Minor
>
> spark-sql also supports *VALUES*.
> {code:java}
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three');
> 1   one
> 2   two
> 3   three
> Time taken: 0.015 seconds, Fetched 3 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') limit 2;
> 1   one
> 2   two
> Time taken: 0.014 seconds, Fetched 2 row(s)
> spark-sql>
> spark-sql> VALUES (1, 'one'), (2, 'two'), (3, 'three') order by 2;
> 1   one
> 3   three
> 2   two
> Time taken: 0.153 seconds, Fetched 3 row(s)
> spark-sql>
> {code}
> *VALUES* can even be used along with INSERT INTO or SELECT.
> Refer: https://www.postgresql.org/docs/current/sql-values.html
> So please confirm whether VALUES can also be documented.






[jira] [Comment Edited] (SPARK-30392) Documentation for the date_trunc function is incorrect

2019-12-30 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005924#comment-17005924
 ] 

Ankit Raj Boudh edited comment on SPARK-30392 at 12/31/19 4:44 AM:
---

[~avanname], the changes are already present in the branches below ([SPARK-24378][SQL] 
Fix date_trunc function incorrect examples):

branch-2.3 :-

--

[https://github.com/apache/spark/blob/75cc3b2da9ee0b51ecf0f13169f2b634e36a60c4/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala#L1438]

branch-2.4 :-

--

[https://github.com/apache/spark/blob/db32408f2a9295e4053dd3a1ebf8ae013557ff96/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala#L1552]

 

but it is not updated here: 
[https://spark.apache.org/docs/2.3.0/api/sql/index.html]

 

cc [~srowen] ,[@cloud-fan|https://github.com/cloud-fan] and [@HyukjinKwon , 
please check.|https://github.com/HyukjinKwon]


was (Author: ankitraj):
[~avanname] , changes are already present in below branches.

branch-2.3 :-

--

[https://github.com/apache/spark/blob/75cc3b2da9ee0b51ecf0f13169f2b634e36a60c4/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala#L1438]

branch-2.4 :-

--

[https://github.com/apache/spark/blob/db32408f2a9295e4053dd3a1ebf8ae013557ff96/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala#L1552]

 

but here it's not updated 
[https://spark.apache.org/docs/2.3.0/api/sql/index.html]

 

cc [~srowen] ,[@cloud-fan|https://github.com/cloud-fan] and [@HyukjinKwon , 
please check.|https://github.com/HyukjinKwon]

> Documentation for the date_trunc function is incorrect
> --
>
> Key: SPARK-30392
> URL: https://issues.apache.org/jira/browse/SPARK-30392
> Project: Spark
>  Issue Type: Documentation
>  Components: SQL
>Affects Versions: 2.4.3
>Reporter: Ashley
>Priority: Minor
> Attachments: Date_trunc.PNG
>
>
> The documentation for the date_trunc function includes a few sample SELECT 
> statements to show how the function works: 
> [https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc]
> In the sample SELECT statements, the inputs to the function are swapped:
> *The docs show:* SELECT date_trunc('2015-03-05T09:32:05.359', 'YEAR'); 
>  *The docs _should_ show:* SELECT date_trunc('YEAR', 
> '2015-03-05T09:32:05.359');
>  
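For illustration, the corrected argument order — the field first, then the timestamp — can be mirrored in a small Python sketch (a hypothetical analogue of date_trunc, not Spark's implementation):

```python
from datetime import datetime

def date_trunc(field: str, ts: str) -> datetime:
    """Truncate a timestamp string to the given field,
    mirroring the argument order date_trunc(field, timestamp)."""
    dt = datetime.fromisoformat(ts)
    if field.upper() == "YEAR":
        return dt.replace(month=1, day=1, hour=0, minute=0,
                          second=0, microsecond=0)
    if field.upper() == "MONTH":
        return dt.replace(day=1, hour=0, minute=0,
                          second=0, microsecond=0)
    raise ValueError(f"unsupported field: {field}")
```

Calling it with the arguments swapped, as the docs currently show, would fail immediately, which is exactly why the documented order matters.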






[jira] [Commented] (SPARK-30392) Documentation for the date_trunc function is incorrect

2019-12-30 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005924#comment-17005924
 ] 

Ankit Raj Boudh commented on SPARK-30392:
-

[~avanname], the changes are already present in the branches below:

branch-2.3 :-

--

[https://github.com/apache/spark/blob/75cc3b2da9ee0b51ecf0f13169f2b634e36a60c4/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala#L1438]

branch-2.4 :-

--

[https://github.com/apache/spark/blob/db32408f2a9295e4053dd3a1ebf8ae013557ff96/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala#L1552]

 

but it is not updated here: 
[https://spark.apache.org/docs/2.3.0/api/sql/index.html]

 

cc [~srowen] ,[@cloud-fan|https://github.com/cloud-fan] and [@HyukjinKwon , 
please check.|https://github.com/HyukjinKwon]

> Documentation for the date_trunc function is incorrect
> --
>
> Key: SPARK-30392
> URL: https://issues.apache.org/jira/browse/SPARK-30392
> Project: Spark
>  Issue Type: Documentation
>  Components: SQL
>Affects Versions: 2.4.3
>Reporter: Ashley
>Priority: Minor
> Attachments: Date_trunc.PNG
>
>
> The documentation for the date_trunc function includes a few sample SELECT 
> statements to show how the function works: 
> [https://spark.apache.org/docs/2.3.0/api/sql/#date_trunc]
> In the sample SELECT statements, the inputs to the function are swapped:
> *The docs show:* SELECT date_trunc('2015-03-05T09:32:05.359', 'YEAR'); 
>  *The docs _should_ show:* SELECT date_trunc('YEAR', 
> '2015-03-05T09:32:05.359');
>  






[jira] [Commented] (SPARK-30392) Documentation for the date_trunc function is incorrect

2019-12-30 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005901#comment-17005901
 ] 

Ankit Raj Boudh commented on SPARK-30392:
-

I will raise PR for this.




[jira] [Issue Comment Deleted] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-30 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-29776:

Comment: was deleted

(was: [~hyukjin.kwon], as per the discussion, I updated the status of both SPARK-29776 
and SPARK-29853 to "Resolved" with the resolution "Not a Problem". If that is correct, 
could you please assign both Jiras to me so that we can close them?)

> rpad and lpad should return NULL when padstring parameter is empty
> --
>
> Key: SPARK-29776
> URL: https://issues.apache.org/jira/browse/SPARK-29776
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Major
>
> As per rpad definition
>  rpad
>  rpad(str, len, pad) - Returns str, right-padded with pad to a length of len
>  If str is longer than len, the return value is shortened to len characters.
>  *In case of empty pad string, the return value is null.*
> Below is an example.
> In Spark:
> {code}
> 0: jdbc:hive2://10.18.19.208:23040/default> SELECT rpad('hi', 5, '');
> +----------------+
> | rpad(hi, 5, )  |
> +----------------+
> | hi             |
> +----------------+
> {code}
> It should return NULL as per definition.
>  
> Hive behavior is correct: as per the definition, it returns NULL when pad is an 
> empty string.
> INFO : Concurrency mode is disabled, not creating a lock manager
> {code}
> +-------+
> |  _c0  |
> +-------+
> | NULL  |
> +-------+
> {code}
>  
>  
>  
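The NULL-for-empty-pad behaviour requested here can be modelled in plain Python, with None standing in for SQL NULL. This is an illustrative sketch of the definition quoted above, not Spark's actual implementation (which returns the unpadded string for an empty pad):

```python
def rpad(s, n, pad):
    """rpad per the definition quoted above: right-pad s with pad to
    length n; truncate if s is longer; return None (SQL NULL) when pad
    is the empty string, matching Hive's behaviour."""
    if s is None or pad is None:
        return None
    if len(s) >= n:
        return s[:n]
    if pad == "":
        return None  # the behaviour this issue asks for
    out = s
    while len(out) < n:
        out += pad  # pad cyclically, then trim to n
    return out[:n]

print(rpad("hi", 5, "ab"))  # hiaba
print(rpad("hi", 5, ""))    # None  (Spark instead returns "hi")
```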






[jira] [Commented] (SPARK-30389) Add jar should not allow to add other format except jar file

2019-12-30 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005312#comment-17005312
 ] 

Ankit Raj Boudh commented on SPARK-30389:
-

I will raise a PR for this.

> Add jar should not allow to add other format except jar file
> 
>
> Key: SPARK-30389
> URL: https://issues.apache.org/jira/browse/SPARK-30389
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> spark-sql> add jar /opt/abhi/udf/test1jar/12.txt;
> ADD JAR /opt/abhi/udf/test1jar/12.txt
> Added [/opt/abhi/udf/test1jar/12.txt] to class path
> spark-sql> list jar;
> spark://vm1:45169/jars/12.txt
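The validation this issue proposes amounts to a file-extension guard before the jar is added to the class path. A minimal sketch (the helper name is illustrative, not Spark's actual ADD JAR code path):

```python
from pathlib import Path

def validate_jar(path: str) -> str:
    """Reject ADD JAR arguments that do not end in .jar — a sketch of
    the check this issue proposes."""
    if Path(path).suffix.lower() != ".jar":
        raise ValueError(f"ADD JAR expects a .jar file, got: {path}")
    return path

validate_jar("/opt/abhi/udf/test1jar/test.jar")    # accepted
try:
    validate_jar("/opt/abhi/udf/test1jar/12.txt")  # rejected
except ValueError as e:
    print(e)
```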






[jira] [Commented] (SPARK-30384) Needs to improve the Column name and tooltips for the Fair Scheduler Pool Table

2019-12-30 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005215#comment-17005215
 ] 

Ankit Raj Boudh commented on SPARK-30384:
-

I will submit the PR.

> Needs to improve the Column name and tooltips for the Fair Scheduler Pool 
> Table 
> 
>
> Key: SPARK-30384
> URL: https://issues.apache.org/jira/browse/SPARK-30384
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> There are different columns in the Fair Scheduler Pools Table under Stage Tab.
> Issue 1: SchedulingMode should be separated as 'Scheduling Mode'
> Issue 2: Minimum Share, Pool Weight and Scheduling Mode require meaningful 
> tooltips for the end user to understand.






[jira] [Resolved] (SPARK-29437) CSV Writer should escape 'escapechar' when it exists in the data

2019-12-29 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh resolved SPARK-29437.
-
Resolution: Not A Problem

> CSV Writer should escape 'escapechar' when it exists in the data
> 
>
> Key: SPARK-29437
> URL: https://issues.apache.org/jira/browse/SPARK-29437
> Project: Spark
>  Issue Type: Bug
>  Components: Input/Output
>Affects Versions: 2.4.3
>Reporter: Tomasz Bartczak
>Priority: Trivial
>
> When the data contains escape character (default '\') it should either be 
> escaped or quoted.
> Steps to reproduce: 
> [https://gist.github.com/kretes/58f7f66a0780681a44c175a2ac3c0da2]
>  
> The effect is either bad data being read or sometimes an inability to properly 
> read the CSV at all, e.g. when the escape character is the last character in the 
> column: it breaks the column reading for that row and can effectively break e.g. 
> type inference for a dataframe.
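For comparison, Python's csv module sidesteps the problem by quoting fields that contain special characters, so even a trailing backslash round-trips cleanly. A small illustrative sketch (not Spark code):

```python
import csv
import io

row = ["ends with escape char \\", "plain"]

# Write with every field quoted; the backslash needs no escaping inside quotes.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(row)

# Reading the quoted output recovers the original values, including the
# trailing backslash that broke Spark's CSV round-trip in this issue.
back = next(csv.reader(io.StringIO(buf.getvalue())))
print(back == row)  # True
```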






[jira] [Commented] (SPARK-30130) Hardcoded numeric values in common table expressions which utilize GROUP BY are interpreted as ordinal positions

2019-12-29 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004824#comment-17004824
 ] 

Ankit Raj Boudh commented on SPARK-30130:
-

[~hyukjin.kwon], please confirm whether this needs to be fixed; if so, I will start 
working on this Jira.

> Hardcoded numeric values in common table expressions which utilize GROUP BY 
> are interpreted as ordinal positions
> 
>
> Key: SPARK-30130
> URL: https://issues.apache.org/jira/browse/SPARK-30130
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.4
>Reporter: Matt Boegner
>Priority: Minor
>
> Hardcoded numeric values in common table expressions which utilize GROUP BY 
> are interpreted as ordinal positions.
> {code:java}
> val df = spark.sql("""
>  with a as (select 0 as test, count(*) group by test)
>  select * from a
>  """)
>  df.show(){code}
> This results in an error message like {color:#e01e5a}GROUP BY position 0 is 
> not in select list (valid range is [1, 2]){color} .
>  
> However, this error does not appear in a traditional subselect format. For 
> example, this query executes correctly:
> {code:java}
> val df = spark.sql("""
>  select * from (select 0 as test, count(*) group by test) a
>  """)
>  df.show(){code}
>  






[jira] [Commented] (SPARK-30383) Remove meaning less tooltip from Executor Tab

2019-12-29 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004794#comment-17004794
 ] 

Ankit Raj Boudh commented on SPARK-30383:
-

I think not only the Executor tab but all the pages need to be checked. I will check 
all the pages and submit a PR today.

> Remove meaning less tooltip from Executor Tab 
> --
>
> Key: SPARK-30383
> URL: https://issues.apache.org/jira/browse/SPARK-30383
> Project: Spark
>  Issue Type: Improvement
>  Components: Web UI
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
>
> There are tooltips displayed as-is, like Disk Used and Total Tasks, in the Executor 
> Table under the Executor tab.
> We should improve them and remove meaningless tooltips.






[jira] [Commented] (SPARK-30383) Remove meaning less tooltip from Executor Tab

2019-12-29 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004792#comment-17004792
 ] 

Ankit Raj Boudh commented on SPARK-30383:
-

I will submit the PR.




[jira] [Issue Comment Deleted] (SPARK-29853) lpad returning empty instead of NULL for empty pad value

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-29853:

Comment: was deleted

(was: As per the discussion, the old behaviour does not need to change.)

> lpad returning empty instead of NULL for empty pad value
> 
>
> Key: SPARK-29853
> URL: https://issues.apache.org/jira/browse/SPARK-29853
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: ABHISHEK KUMAR GUPTA
>Priority: Minor
> Fix For: 3.0.0
>
>
> Spark
> 0: jdbc:hive2://10.18.18.214:23040/default> SELECT lpad('hi', 5, '');
> +----------------+
> | lpad(hi, 5, )  |
> +----------------+
> | hi             |
> +----------------+
> 1 row selected (0.186 seconds)
> Hive:
> INFO : Concurrency mode is disabled, not creating a lock manager
> +-------+
> |  _c0  |
> +-------+
> | NULL  |
> +-------+






[jira] [Comment Edited] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-28 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004447#comment-17004447
 ] 

Ankit Raj Boudh edited comment on SPARK-29776 at 12/28/19 10:34 AM:


[~hyukjin.kwon], as per the discussion, I updated the status of both SPARK-29776 and 
SPARK-29853 to "Resolved" with the resolution "Not a Problem". If that is correct, 
could you please assign both Jiras to me so that we can close them?


was (Author: ankitraj):
[~hyukjin.kwon], I updated the status of both SPARK-29776 and SPARK-29853 to 
"Resolved" with the resolution "Not a Problem". If that is correct, could you please 
assign both Jiras to me so that we can close them?




[jira] [Comment Edited] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-28 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004447#comment-17004447
 ] 

Ankit Raj Boudh edited comment on SPARK-29776 at 12/28/19 10:21 AM:


[~hyukjin.kwon], I updated the status of both SPARK-29776 and SPARK-29853 to 
"Resolved" with the resolution "Not a Problem". If that is correct, could you please 
assign both Jiras to me so that we can close them?


was (Author: ankitraj):
[~hyukjin.kwon], I updated the status of both SPARK-29776 and SPARK-29853 to 
"Resolved" with the resolution "Not a Problem". If that is correct, could you please 
assign both Jiras to me?




[jira] [Comment Edited] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-28 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004447#comment-17004447
 ] 

Ankit Raj Boudh edited comment on SPARK-29776 at 12/28/19 10:19 AM:


[~hyukjin.kwon], I updated the status of both SPARK-29776 and SPARK-29853 to 
"Resolved" with the resolution "Not a Problem". If that is correct, could you please 
assign both Jiras to me?


was (Author: ankitraj):
[~hyukjin.kwon], I resolved both SPARK-29776 and SPARK-29853 with the resolution 
"Not a Problem". Will you please assign both Jiras to me?




[jira] [Commented] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-28 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17004447#comment-17004447
 ] 

Ankit Raj Boudh commented on SPARK-29776:
-

[~hyukjin.kwon], I resolved both SPARK-29776 and SPARK-29853 with the resolution 
"Not a Problem". Will you please assign both Jiras to me?




[jira] [Resolved] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh resolved SPARK-29776.
-
Resolution: Not A Problem

As per the discussion, the old behaviour does not need to change.




[jira] [Resolved] (SPARK-29853) lpad returning empty instead of NULL for empty pad value

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh resolved SPARK-29853.
-
Resolution: Not A Problem

As per the discussion, the old behaviour does not need to change.




[jira] [Reopened] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh reopened SPARK-29776:
-




[jira] [Reopened] (SPARK-29853) lpad returning empty instead of NULL for empty pad value

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh reopened SPARK-29853:
-




[jira] [Resolved] (SPARK-29853) lpad returning empty instead of NULL for empty pad value

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh resolved SPARK-29853.
-
Resolution: Won't Fix

As per the discussion, the behaviour does not need to change.




[jira] [Reopened] (SPARK-29853) lpad returning empty instead of NULL for empty pad value

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh reopened SPARK-29853:
-




[jira] [Resolved] (SPARK-29853) lpad returning empty instead of NULL for empty pad value

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh resolved SPARK-29853.
-
Resolution: Won't Fix




[jira] [Reopened] (SPARK-29853) lpad returning empty instead of NULL for empty pad value

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh reopened SPARK-29853:
-




[jira] [Resolved] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-28 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh resolved SPARK-29776.
-
Resolution: Won't Fix




[jira] [Resolved] (SPARK-29853) lpad returning empty instead of NULL for empty pad value

2019-12-26 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh resolved SPARK-29853.
-
Fix Version/s: 3.0.0
   Resolution: Duplicate




[jira] [Updated] (SPARK-29776) rpad and lpad should return NULL when padstring parameter is empty

2019-12-26 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-29776:

Summary: rpad and lpad should return NULL when padstring parameter is empty 
 (was: rpad returning invalid value when parameter is empty)




[jira] [Commented] (SPARK-30328) Fail to write local files with RDD.saveTextFile when setting the incorrect Hadoop configuration files

2019-12-23 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002613#comment-17002613
 ] 

Ankit Raj Boudh commented on SPARK-30328:
-

Thank you [~tobe], i will analyse this issue and will update you

> Fail to write local files with RDD.saveTextFile when setting the incorrect 
> Hadoop configuration files
> -
>
> Key: SPARK-30328
> URL: https://issues.apache.org/jira/browse/SPARK-30328
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: chendihao
>Priority: Major
>
> We find that incorrect Hadoop configuration files cause saving an RDD to the 
> local file system to fail. This is not expected because we have specified a 
> local URL, and the DataFrame.write.text API does not have this issue. 
> It is easy to reproduce and verify with Spark 2.3.0.
> 1. Do not set the `HADOOP_CONF_DIR` environment variable.
> 2. Install pyspark and run the local Python script. This should work and save 
> files to the local file system.
> {code:java}
> from pyspark.sql import SparkSession
> spark = SparkSession.builder.master("local").getOrCreate()
> sc = spark.sparkContext
> rdd = sc.parallelize([1, 2, 3])
> rdd.saveAsTextFile("file:///tmp/rdd.text")
> {code}
> 3. Set the `HADOOP_CONF_DIR` environment variable and put the Hadoop 
> configuration files there. Make sure the format of `core-site.xml` is right 
> but that it contains an unresolvable host name.
> 4. Run the same Python script again. When it tries to connect to HDFS and 
> finds the unresolvable host name, a Java exception is thrown.
> We think `saveAsTextFile("file:///)` should not attempt to connect to HDFS 
> regardless of whether `HADOOP_CONF_DIR` is set. Actually, the following 
> DataFrame code works with the same incorrect Hadoop configuration files.
> {code:java}
> from pyspark.sql import SparkSession
> spark = SparkSession.builder.master("local").getOrCreate()
> rows = [("a", "1"), ("b", "2")]  # sample data; `rows` was undefined in the snippet
> df = spark.createDataFrame(rows, ["attribute", "value"])
> df.write.parquet("file:///tmp/df.parquet")
> {code}
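The resolution rule the reporter expects — an explicit `file://` scheme should win over any configured default filesystem — can be sketched in plain Python (an illustrative model, not Hadoop's actual code; the default URL below is a made-up placeholder):

```python
from urllib.parse import urlparse

def resolve_filesystem(path, default_fs="hdfs://namenode:8020"):
    """Sketch of Hadoop-style path resolution.

    An explicit scheme in the path ('file:///tmp/x' -> 'file') should be
    honored without ever consulting the configured default filesystem, so
    a broken core-site.xml cannot break local writes. Only a bare path
    ('/tmp/x') should fall back to the fs.defaultFS scheme.
    """
    scheme = urlparse(path).scheme
    if scheme:
        return scheme  # explicit scheme wins; no HDFS lookup needed
    return urlparse(default_fs).scheme  # bare path -> configured default
```

Under this rule, `saveAsTextFile("file:///tmp/rdd.text")` would resolve to the local filesystem whether or not `HADOOP_CONF_DIR` points at bad configuration.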






[jira] [Commented] (SPARK-30328) Fail to write local files with RDD.saveTextFile when setting the incorrect Hadoop configuration files

2019-12-23 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002585#comment-17002585
 ] 

Ankit Raj Boudh commented on SPARK-30328:
-

@chendihao, can I check this issue?

> Fail to write local files with RDD.saveTextFile when setting the incorrect 
> Hadoop configuration files
> -
>
> Key: SPARK-30328
> URL: https://issues.apache.org/jira/browse/SPARK-30328
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.0
>Reporter: chendihao
>Priority: Major
>
> We find that incorrect Hadoop configuration files cause saving an RDD to the 
> local file system to fail. This is not expected because we have specified a 
> local URL, and the DataFrame.write.text API does not have this issue. 
> It is easy to reproduce and verify with Spark 2.3.0.
> 1. Do not set the `HADOOP_CONF_DIR` environment variable.
> 2. Install pyspark and run the local Python script. This should work and save 
> files to the local file system.
> {code:java}
> from pyspark.sql import SparkSession
> spark = SparkSession.builder.master("local").getOrCreate()
> sc = spark.sparkContext
> rdd = sc.parallelize([1, 2, 3])
> rdd.saveAsTextFile("file:///tmp/rdd.text")
> {code}
> 3. Set the `HADOOP_CONF_DIR` environment variable and put the Hadoop 
> configuration files there. Make sure the format of `core-site.xml` is right 
> but that it contains an unresolvable host name.
> 4. Run the same Python script again. When it tries to connect to HDFS and 
> finds the unresolvable host name, a Java exception is thrown.
> We think `saveAsTextFile("file:///)` should not attempt to connect to HDFS 
> no matter whether `HADOOP_CONF_DIR` is set. Actually, the following code works 
> with the same incorrect Hadoop configuration files.
> {code:java}
> from pyspark.sql import SparkSession
> spark = SparkSession.builder.master("local").getOrCreate()
> rows = [("a", "1"), ("b", "2")]  # sample data; `rows` was undefined in the snippet
> df = spark.createDataFrame(rows, ["attribute", "value"])
> df.write.parquet("file:///tmp/df.parquet")
> {code}






[jira] [Commented] (SPARK-30108) Add robust accumulator for observable metrics

2019-12-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17001619#comment-17001619
 ] 

Ankit Raj Boudh commented on SPARK-30108:
-

Thank you [~hvanhovell]; I will take care of this point during development of 
this feature.

> Add robust accumulator for observable metrics
> -
>
> Key: SPARK-30108
> URL: https://issues.apache.org/jira/browse/SPARK-30108
> Project: Spark
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Herman van Hövell
>Priority: Major
>
> Spark accumulators reflect the work that has been done, and not the data that 
> has been processed. There are situations where one tuple can be processed 
> multiple times, e.g. task/stage retries, speculation, determination of 
> ranges for global ordering, etc. For observed metrics we need the value of 
> the accumulator to be based on the data and not on processing.
> The current aggregating accumulator is already robust to some of these issues 
> (like task failure), but we need to add some additional checks to make sure 
> it is foolproof.
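A minimal sketch of the idea — crediting the accumulator per record rather than per processing attempt, so retries and speculation cannot double-count — in plain Python (illustrative only; the class name and structure are assumptions, not Spark's design):

```python
class RobustAccumulator:
    """Accumulator keyed by record identity.

    Re-processing the same record (task retry, speculative duplicate)
    overwrites the previous contribution instead of adding to it, so the
    final value reflects the data, not the amount of work performed.
    """

    def __init__(self):
        self._contributions = {}  # record key -> contributed value

    def add(self, key, value):
        # Idempotent per key: a retried task reporting the same record
        # replaces its earlier contribution rather than stacking on top.
        self._contributions[key] = value

    @property
    def value(self):
        return sum(self._contributions.values())
```

A retried task that re-reports `("row-1", 10)` leaves the total unchanged, which is the robustness property the issue asks for.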






[jira] [Issue Comment Deleted] (SPARK-30308) Update Netty and Netty-all to address CVE-2019-16869

2019-12-20 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-30308:

Comment: was deleted

(was: [~vishwaskumar], 4.1.43.Final version we need to update ?)

> Update Netty and Netty-all to address CVE-2019-16869
> 
>
> Key: SPARK-30308
> URL: https://issues.apache.org/jira/browse/SPARK-30308
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Security
>Affects Versions: 2.4.4
>Reporter: Vishwas
>Priority: Minor
>  Labels: security
> Fix For: 2.4.4
>
>
> As per [CVE-2019-16869|http://www.cvedetails.com/cve/CVE-2019-16869/], netty 
> mishandled whitespace before the colon in HTTP headers (such as a 
> "Transfer-Encoding : chunked" line), which led to HTTP request smuggling.
> This issue has been resolved in version 4.1.42.Final for both netty and 
> netty-all packages. 
>  
>  
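The class of malformed header the CVE describes can be illustrated with a small Python check (a sketch of the RFC 7230 rule that no whitespace may appear between the header field name and the colon; this is not Netty's actual code):

```python
def is_smuggling_header(line):
    """Return True if a header line has whitespace before the colon.

    RFC 7230 section 3.2.4 forbids whitespace between the field name and
    the colon; a parser that tolerates 'Transfer-Encoding : chunked' can
    disagree with a stricter downstream parser, enabling request smuggling
    (CVE-2019-16869).
    """
    name, sep, _ = line.partition(":")
    # A trailing space/tab in the field name means the raw line had
    # whitespace before the colon.
    return sep == ":" and name != name.rstrip()
```

A compliant server should reject such a line outright rather than interpret it.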






[jira] [Comment Edited] (SPARK-30308) Update Netty and Netty-all to address CVE-2019-16869

2019-12-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000760#comment-17000760
 ] 

Ankit Raj Boudh edited comment on SPARK-30308 at 12/20/19 9:35 AM:
---

[~srowen], please help me close this issue.


was (Author: ankitraj):
@srowen , please help me to close this issue

> Update Netty and Netty-all to address CVE-2019-16869
> 
>
> Key: SPARK-30308
> URL: https://issues.apache.org/jira/browse/SPARK-30308
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Security
>Affects Versions: 2.4.4
>Reporter: Vishwas
>Priority: Minor
>  Labels: security
> Fix For: 2.4.4
>
>
> As per [CVE-2019-16869|http://www.cvedetails.com/cve/CVE-2019-16869/], netty 
> mishandled whitespace before the colon in HTTP headers (such as a 
> "Transfer-Encoding : chunked" line), which led to HTTP request smuggling.
> This issue has been resolved in version 4.1.42.Final for both netty and 
> netty-all packages. 
>  
>  






[jira] [Commented] (SPARK-30308) Update Netty and Netty-all to address CVE-2019-16869

2019-12-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000760#comment-17000760
 ] 

Ankit Raj Boudh commented on SPARK-30308:
-

@srowen, please help me close this issue.

> Update Netty and Netty-all to address CVE-2019-16869
> 
>
> Key: SPARK-30308
> URL: https://issues.apache.org/jira/browse/SPARK-30308
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Security
>Affects Versions: 2.4.4
>Reporter: Vishwas
>Priority: Minor
>  Labels: security
> Fix For: 2.4.4
>
>
> As per [CVE-2019-16869|http://www.cvedetails.com/cve/CVE-2019-16869/], netty 
> mishandled whitespace before the colon in HTTP headers (such as a 
> "Transfer-Encoding : chunked" line), which led to HTTP request smuggling.
> This issue has been resolved in version 4.1.42.Final for both netty and 
> netty-all packages. 
>  
>  






[jira] [Issue Comment Deleted] (SPARK-30308) Update Netty and Netty-all to address CVE-2019-16869

2019-12-20 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-30308:

Comment: was deleted

(was: need to mention as not a problem)

> Update Netty and Netty-all to address CVE-2019-16869
> 
>
> Key: SPARK-30308
> URL: https://issues.apache.org/jira/browse/SPARK-30308
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Security
>Affects Versions: 2.4.4
>Reporter: Vishwas
>Priority: Minor
>  Labels: security
> Fix For: 2.4.4
>
>
> As per [CVE-2019-16869|http://www.cvedetails.com/cve/CVE-2019-16869/], netty 
> mishandled whitespace before the colon in HTTP headers (such as a 
> "Transfer-Encoding : chunked" line), which led to HTTP request smuggling.
> This issue has been resolved in version 4.1.42.Final for both netty and 
> netty-all packages. 
>  
>  






[jira] [Reopened] (SPARK-30308) Update Netty and Netty-all to address CVE-2019-16869

2019-12-20 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh reopened SPARK-30308:
-

need to mention as not a problem

> Update Netty and Netty-all to address CVE-2019-16869
> 
>
> Key: SPARK-30308
> URL: https://issues.apache.org/jira/browse/SPARK-30308
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Security
>Affects Versions: 2.4.4
>Reporter: Vishwas
>Priority: Minor
>  Labels: security
> Fix For: 2.4.4
>
>
> As per [CVE-2019-16869|http://www.cvedetails.com/cve/CVE-2019-16869/], netty 
> mishandled whitespace before the colon in HTTP headers (such as a 
> "Transfer-Encoding : chunked" line), which led to HTTP request smuggling.
> This issue has been resolved in version 4.1.42.Final for both netty and 
> netty-all packages. 
>  
>  






[jira] [Resolved] (SPARK-30308) Update Netty and Netty-all to address CVE-2019-16869

2019-12-20 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh resolved SPARK-30308.
-
Fix Version/s: 2.4.4
   Resolution: Resolved

> Update Netty and Netty-all to address CVE-2019-16869
> 
>
> Key: SPARK-30308
> URL: https://issues.apache.org/jira/browse/SPARK-30308
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Security
>Affects Versions: 2.4.4
>Reporter: Vishwas
>Priority: Minor
>  Labels: security
> Fix For: 2.4.4
>
>
> As per [CVE-2019-16869|http://www.cvedetails.com/cve/CVE-2019-16869/], netty 
> mishandled whitespace before the colon in HTTP headers (such as a 
> "Transfer-Encoding : chunked" line), which led to HTTP request smuggling.
> This issue has been resolved in version 4.1.42.Final for both netty and 
> netty-all packages. 
>  
>  






[jira] [Commented] (SPARK-30308) Update Netty and Netty-all to address CVE-2019-16869

2019-12-20 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000756#comment-17000756
 ] 

Ankit Raj Boudh commented on SPARK-30308:
-

It was already updated to 4.1.42.Final as part of SPARK-29445.

> Update Netty and Netty-all to address CVE-2019-16869
> 
>
> Key: SPARK-30308
> URL: https://issues.apache.org/jira/browse/SPARK-30308
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Security
>Affects Versions: 2.4.4
>Reporter: Vishwas
>Priority: Minor
>  Labels: security
>
> As per [CVE-2019-16869|http://www.cvedetails.com/cve/CVE-2019-16869/], netty 
> mishandled whitespace before the colon in HTTP headers (such as a 
> "Transfer-Encoding : chunked" line), which led to HTTP request smuggling.
> This issue has been resolved in version 4.1.42.Final for both netty and 
> netty-all packages. 
>  
>  






[jira] [Issue Comment Deleted] (SPARK-30313) Flaky test: MasterSuite.master/worker web ui available with reverseProxy

2019-12-19 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-30313:

Comment: was deleted

(was: [~vanzin] , i have reproduce the issue, i will submit the PR soon)

> Flaky test: MasterSuite.master/worker web ui available with reverseProxy
> 
>
> Key: SPARK-30313
> URL: https://issues.apache.org/jira/browse/SPARK-30313
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Masiero Vanzin
>Priority: Major
>
> Saw this test fail a few times on PRs. e.g.:
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115583/testReport/org.apache.spark.deploy.master/MasterSuite/master_worker_web_ui_available_with_reverseProxy/]
>  
> {noformat}
> Error Message
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
> Stacktrace
> sbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
>   at 
> org.scalatest.concurrent.Eventually.tryTryAgain$1(Eventually.scala:432)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:439)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:391)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:308)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:307)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at 
> org.apache.spark.deploy.master.MasterSuite.$anonfun$new$14(MasterSuite.scala:318)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ---
> Caused by: sbt.ForkMain$ForkError: java.io.IOException: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at java.net.URL.openStream(URL.java:1045)
>   at scala.io.Source$.fromURL(Source.scala:144)
>   at scala.io.Source$.fromURL(Source.scala:134)
> {noformat}
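The `eventually` retry loop that times out in this stack trace can be sketched in plain Python (illustrative only, mirroring the scalatest semantics of retrying an assertion until it passes or a deadline expires):

```python
import time

def eventually(assertion, timeout=5.0, interval=0.05):
    """Retry `assertion` until it returns without raising AssertionError.

    Mirrors scalatest's Eventually: keep retrying at `interval` until
    `timeout` elapses, then re-raise the last failure (the behavior seen
    in the TestFailedDueToTimeoutException above).
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            return assertion()
        except AssertionError:
            if time.monotonic() >= deadline:
                raise  # surface the last failure, like the quoted trace
            time.sleep(interval)
```

In the flaky test, the assertion is an HTTP fetch of the reverse-proxied worker JSON page, which keeps returning 500 until the 5-second budget is exhausted.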






[jira] [Commented] (SPARK-30313) Flaky test: MasterSuite.master/worker web ui available with reverseProxy

2019-12-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000570#comment-17000570
 ] 

Ankit Raj Boudh commented on SPARK-30313:
-

[~vanzin], I have reproduced the issue; I will submit a PR soon.

> Flaky test: MasterSuite.master/worker web ui available with reverseProxy
> 
>
> Key: SPARK-30313
> URL: https://issues.apache.org/jira/browse/SPARK-30313
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Masiero Vanzin
>Priority: Major
>
> Saw this test fail a few times on PRs. e.g.:
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115583/testReport/org.apache.spark.deploy.master/MasterSuite/master_worker_web_ui_available_with_reverseProxy/]
>  
> {noformat}
> Error Message
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
> Stacktrace
> sbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
>   at 
> org.scalatest.concurrent.Eventually.tryTryAgain$1(Eventually.scala:432)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:439)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:391)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:308)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:307)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at 
> org.apache.spark.deploy.master.MasterSuite.$anonfun$new$14(MasterSuite.scala:318)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ---
> Caused by: sbt.ForkMain$ForkError: java.io.IOException: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at java.net.URL.openStream(URL.java:1045)
>   at scala.io.Source$.fromURL(Source.scala:144)
>   at scala.io.Source$.fromURL(Source.scala:134)
> {noformat}






[jira] [Commented] (SPARK-30313) Flaky test: MasterSuite.master/worker web ui available with reverseProxy

2019-12-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000550#comment-17000550
 ] 

Ankit Raj Boudh commented on SPARK-30313:
-

OK, I will try to reproduce it.

> Flaky test: MasterSuite.master/worker web ui available with reverseProxy
> 
>
> Key: SPARK-30313
> URL: https://issues.apache.org/jira/browse/SPARK-30313
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Masiero Vanzin
>Priority: Major
>
> Saw this test fail a few times on PRs. e.g.:
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115583/testReport/org.apache.spark.deploy.master/MasterSuite/master_worker_web_ui_available_with_reverseProxy/]
>  
> {noformat}
> Error Message
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
> Stacktrace
> sbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
>   at 
> org.scalatest.concurrent.Eventually.tryTryAgain$1(Eventually.scala:432)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:439)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:391)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:308)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:307)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at 
> org.apache.spark.deploy.master.MasterSuite.$anonfun$new$14(MasterSuite.scala:318)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ---
> Caused by: sbt.ForkMain$ForkError: java.io.IOException: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at java.net.URL.openStream(URL.java:1045)
>   at scala.io.Source$.fromURL(Source.scala:144)
>   at scala.io.Source$.fromURL(Source.scala:134)
> {noformat}






[jira] [Issue Comment Deleted] (SPARK-30313) Flaky test: MasterSuite.master/worker web ui available with reverseProxy

2019-12-19 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-30313:

Comment: was deleted

(was: I think it's not an issue.)

> Flaky test: MasterSuite.master/worker web ui available with reverseProxy
> 
>
> Key: SPARK-30313
> URL: https://issues.apache.org/jira/browse/SPARK-30313
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Masiero Vanzin
>Priority: Major
>
> Saw this test fail a few times on PRs. e.g.:
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115583/testReport/org.apache.spark.deploy.master/MasterSuite/master_worker_web_ui_available_with_reverseProxy/]
>  
> {noformat}
> Error Message
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
> Stacktrace
> sbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
>   at 
> org.scalatest.concurrent.Eventually.tryTryAgain$1(Eventually.scala:432)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:439)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:391)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:308)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:307)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at 
> org.apache.spark.deploy.master.MasterSuite.$anonfun$new$14(MasterSuite.scala:318)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ---
> Caused by: sbt.ForkMain$ForkError: java.io.IOException: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at java.net.URL.openStream(URL.java:1045)
>   at scala.io.Source$.fromURL(Source.scala:144)
>   at scala.io.Source$.fromURL(Source.scala:134)
> {noformat}






[jira] [Commented] (SPARK-30313) Flaky test: MasterSuite.master/worker web ui available with reverseProxy

2019-12-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000548#comment-17000548
 ] 

Ankit Raj Boudh commented on SPARK-30313:
-

I think it's not an issue.

> Flaky test: MasterSuite.master/worker web ui available with reverseProxy
> 
>
> Key: SPARK-30313
> URL: https://issues.apache.org/jira/browse/SPARK-30313
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Masiero Vanzin
>Priority: Major
>
> Saw this test fail a few times on PRs. e.g.:
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115583/testReport/org.apache.spark.deploy.master/MasterSuite/master_worker_web_ui_available_with_reverseProxy/]
>  
> {noformat}
> Error Message
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
> Stacktrace
> sbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
>   at 
> org.scalatest.concurrent.Eventually.tryTryAgain$1(Eventually.scala:432)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:439)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:391)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:308)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:307)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at 
> org.apache.spark.deploy.master.MasterSuite.$anonfun$new$14(MasterSuite.scala:318)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ---
> Caused by: sbt.ForkMain$ForkError: java.io.IOException: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at java.net.URL.openStream(URL.java:1045)
>   at scala.io.Source$.fromURL(Source.scala:144)
>   at scala.io.Source$.fromURL(Source.scala:134)
> {noformat}






[jira] [Commented] (SPARK-30313) Flaky test: MasterSuite.master/worker web ui available with reverseProxy

2019-12-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000542#comment-17000542
 ] 

Ankit Raj Boudh commented on SPARK-30313:
-

[~vanzin], I ran it on my local system and it passes.

> Flaky test: MasterSuite.master/worker web ui available with reverseProxy
> 
>
> Key: SPARK-30313
> URL: https://issues.apache.org/jira/browse/SPARK-30313
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Masiero Vanzin
>Priority: Major
>
> Saw this test fail a few times on PRs. e.g.:
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/115583/testReport/org.apache.spark.deploy.master/MasterSuite/master_worker_web_ui_available_with_reverseProxy/]
>  
> {noformat}
> Error Message
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
> Stacktrace
> sbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedDueToTimeoutException: The code passed to 
> eventually never returned normally. Attempted 43 times over 
> 5.064226577995 seconds. Last failure message: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/.
>   at 
> org.scalatest.concurrent.Eventually.tryTryAgain$1(Eventually.scala:432)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:439)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:391)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:308)
>   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:307)
>   at 
> org.apache.spark.deploy.master.MasterSuite.eventually(MasterSuite.scala:111)
>   at 
> org.apache.spark.deploy.master.MasterSuite.$anonfun$new$14(MasterSuite.scala:318)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ---
> Caused by: sbt.ForkMain$ForkError: java.io.IOException: Server returned HTTP 
> response code: 500 for URL: 
> http://localhost:45395/proxy/worker-20191219134839-localhost-36054/json/
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at java.net.URL.openStream(URL.java:1045)
>   at scala.io.Source$.fromURL(Source.scala:144)
>   at scala.io.Source$.fromURL(Source.scala:134)
> {noformat}
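For context on the retry semantics behind this timeout: ScalaTest's `eventually` re-runs the assertion block until it succeeds or the patience timeout expires, then reports the last failure. A minimal Python sketch of that retry loop (hypothetical helper and names, not Spark or ScalaTest code):

```python
import time

def eventually(check, timeout=5.0, interval=0.1):
    """Retry `check` until it returns without raising, or re-raise the
    last failure once `timeout` seconds have elapsed -- a rough analogue
    of ScalaTest's `eventually`."""
    deadline = time.monotonic() + timeout
    attempts = 0
    while True:
        attempts += 1
        try:
            return check(), attempts
        except Exception:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)

# A check that fails a few times before succeeding, the way a proxy
# endpoint returning HTTP 500 during worker startup might.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise IOError("Server returned HTTP response code: 500")
    return "ok"

result, attempts = eventually(flaky, timeout=2.0, interval=0.01)
```

The test above fails when the checked endpoint keeps returning 500 for the whole patience window, which is why flakiness points at slow worker registration rather than the assertion itself.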



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-30300) Update correct string in UI for metrics when driver updates same metrics id as tasks.

2019-12-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000538#comment-17000538
 ] 

Ankit Raj Boudh commented on SPARK-30300:
-

[~nartal1], no problem; in the end what matters is that the bug gets fixed. Thank you for raising the PR.

> Update correct string in UI for metrics when driver updates same metrics id 
> as tasks.
> -
>
> Key: SPARK-30300
> URL: https://issues.apache.org/jira/browse/SPARK-30300
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Web UI
>Affects Versions: 3.0.0
>Reporter: Niranjan Artal
>Priority: Major
>
> There is a bug in the display of additional max metrics (stageId 
> (attemptId): taskId).
> If the driver updates the same metric that was updated by tasks and the 
> driver's value exceeds the max, that value is not captured. We need to 
> capture this case and update the UI accordingly.
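A minimal sketch of the max-tracking logic the report describes, in Python with hypothetical names (not the actual Spark UI code): every incoming update, whether from a task or the driver, is compared against the stored maximum, and the source label is kept alongside the winning value.

```python
def update_max_metric(metrics, metric_id, source, value):
    """Record `value` for `metric_id`, keeping only the largest value
    seen so far together with the source (task or driver) that
    produced it."""
    current = metrics.get(metric_id)
    if current is None or value > current[0]:
        metrics[metric_id] = (value, source)
    return metrics

metrics = {}
update_max_metric(metrics, "peakMemory", "stage 0.0, task 12", 512)
# The driver reports a larger value for the same metric id; per the
# bug report, this case must also win the max comparison.
update_max_metric(metrics, "peakMemory", "driver", 1024)
```

The reported bug corresponds to skipping the comparison for driver-side updates, so the stored maximum stays at the task value even when the driver's value is larger.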






[jira] [Issue Comment Deleted] (SPARK-30313) Flaky test: MasterSuite.master/worker web ui available with reverseProxy

2019-12-19 Thread Ankit Raj Boudh (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Raj Boudh updated SPARK-30313:

Comment: was deleted

(was: it's failing locally as well; I will raise a PR for this today)







[jira] [Commented] (SPARK-30313) Flaky test: MasterSuite.master/worker web ui available with reverseProxy

2019-12-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000524#comment-17000524
 ] 

Ankit Raj Boudh commented on SPARK-30313:
-

It's failing locally as well; I will raise a PR for this today.







[jira] [Commented] (SPARK-30308) Update Netty and Netty-all to address CVE-2019-16869

2019-12-19 Thread Ankit Raj Boudh (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000522#comment-17000522
 ] 

Ankit Raj Boudh commented on SPARK-30308:
-

[~vishwaskumar], do we need to update to version 4.1.43.Final?

> Update Netty and Netty-all to address CVE-2019-16869
> 
>
> Key: SPARK-30308
> URL: https://issues.apache.org/jira/browse/SPARK-30308
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Security
>Affects Versions: 2.4.4
>Reporter: Vishwas
>Priority: Minor
>  Labels: security
>
> As per [CVE-2019-16869|http://www.cvedetails.com/cve/CVE-2019-16869/], netty 
> mishandled whitespace before the colon in HTTP headers (such as a 
> "Transfer-Encoding : chunked" line), which led to HTTP request smuggling.
> This issue has been resolved in version 4.1.42.Final for both netty and 
> netty-all packages. 
>  
>  
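A small illustration of the malformed header shape behind CVE-2019-16869, as a Python sketch (hypothetical helper, not Netty's parser): a conforming HTTP parser must reject whitespace between the field name and the colon, since front-end and back-end servers that disagree on such lines enable request smuggling.

```python
def is_smuggling_risk(header_line: bytes) -> bool:
    """Flag header lines that have whitespace before the colon,
    e.g. b"Transfer-Encoding : chunked" -- the malformed shape
    described in CVE-2019-16869."""
    name, sep, _value = header_line.partition(b":")
    # Risky only when a colon is present and the field name carries
    # trailing whitespace before it.
    return bool(sep) and name != name.rstrip(b" \t")
```

For example, `is_smuggling_risk(b"Transfer-Encoding : chunked")` flags the line, while a well-formed `b"Transfer-Encoding: chunked"` passes.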





