[jira] [Commented] (SPARK-27170) Better error message for syntax error with extraneous comma in the SQL parser

2019-03-17 Thread Wataru Yukawa (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794447#comment-16794447
 ] 

Wataru Yukawa commented on SPARK-27170:
---

> Any other query with a hard-to-read parsing error message?
Currently, no.
If I find other examples, I'll let you know.
But is ANSI mode available in Spark 3.0?

https://issues.apache.org/jira/browse/SPARK-26976
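For reference, a minimal spark-shell sketch of how ANSI mode can be switched on in Spark 3.x, assuming the flag ends up exposed as spark.sql.ansi.enabled (the name used in the Spark 3.0 release); the table name some_table below is only a placeholder:
{code}
// Hypothetical spark-shell session on Spark 3.x.
// Assumption: ANSI mode is controlled by spark.sql.ansi.enabled (the Spark 3.0 name).
spark.conf.set("spark.sql.ansi.enabled", "true")

// With ANSI mode on, DISTINCT should be treated as a reserved keyword rather than
// a column identifier, so a query like the one in this issue ought to fail at
// parse time with the position of the stray comma.
spark.sql("SELECT distinct ,a ,b ,c FROM some_table LIMIT 100").show()
{code}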

> Better error message for syntax error with extraneous comma in the SQL parser
> -
>
> Key: SPARK-27170
> URL: https://issues.apache.org/jira/browse/SPARK-27170
> Project: Spark
>  Issue Type: Wish
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: Wataru Yukawa
>Priority: Minor
>
> [~maropu], [~smilegator]
> It was great to talk with you at Hadoop / Spark Conference Japan 2019.
> Thanks in advance!
> I filed this issue, which we discussed at that time.
> We sometimes write SQL with a syntax error caused by an extraneous comma.
> For example, here is a query with an extraneous comma at the start of line 2.
> {code}
> SELECT distinct
> ,a
> ,b
> ,c
> FROM ... LIMIT 100
> {code}
> Spark 2.4.0 reports an error message, but I find it a little hard to understand
> because the line number is wrong.
> {code}
> cannot resolve '`distinct`' given input columns: [...]; line 1 pos 7;
> 'GlobalLimit 100
> +- 'LocalLimit 100
>    +- 'Project ['distinct, ...]
>       +- Filter (...)
>          +- SubqueryAlias ...
>             +- HiveTableRelation ...
> {code}
> By the way, here is the error message from PrestoSQL 305 for the same SQL.
> The line number is correct, and I think the error message is better than Spark SQL's.
> {code}
> line 2:5: mismatched input ','. Expecting: '*', , 
> {code}
> It would be great if the Spark SQL error message were improved.
> Thanks.
>  






[jira] [Commented] (SPARK-27170) Better error message for syntax error with extraneous comma in the SQL parser

2019-03-16 Thread Wataru Yukawa (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794194#comment-16794194
 ] 

Wataru Yukawa commented on SPARK-27170:
---

[~dkbiswal]
Thank you for your comment!
I completely understand what you mean.
Actually, if I remember correctly, Presto tries to comply with ANSI SQL.
Of course, if Spark SQL supported ANSI-mode parsing, I think it would be great.
Anyway, thanks!







[jira] [Commented] (SPARK-22506) Spark thrift server can not impersonate user in kerberos

2019-03-14 Thread Wataru Yukawa (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-22506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16793308#comment-16793308
 ] 

Wataru Yukawa commented on SPARK-22506:
---

Hi,

The Spark Thrift Server can impersonate a user in our Kerberized Hadoop cluster
on Spark 2.1.1 (HDP-2.6.2.0) when I execute a SELECT query with the following
setting:
{code:java}
hive.server2.enable.doAs=true
{code}
But it does not impersonate the user for CREATE queries.
For example, if you execute the following query,
/apps/hive/warehouse/hoge.db/piyo in HDFS ends up owned by the hive user.
{code:java}
create table hoge.piyo(str string)
{code}
Thanks
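
For what it's worth, here is a quick sketch of one way to check the resulting directory owner from a Spark shell (the path is the one from the example above):
{code}
import org.apache.hadoop.fs.{FileSystem, Path}

// Check who owns the directory created by the CREATE TABLE above.
// If impersonation worked, the owner should be the connected end user, not hive.
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val status = fs.getFileStatus(new Path("/apps/hive/warehouse/hoge.db/piyo"))
println(s"owner=${status.getOwner}, group=${status.getGroup}")
{code}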

> Spark thrift server can not impersonate user in kerberos 
> -
>
> Key: SPARK-22506
> URL: https://issues.apache.org/jira/browse/SPARK-22506
> Project: Spark
>  Issue Type: Improvement
>  Components: Deploy
>Affects Versions: 2.2.0
>Reporter: sydt
>Priority: Major
> Attachments: screenshot-1.png
>
>
> The Spark Thrift Server cannot impersonate users in a Kerberos environment.
> I launch the Spark Thrift Server in *yarn-client* mode as user *hive*, which is
> allowed to impersonate other users.
> User *jt_jzyx_project7* submits a SQL statement to query its own table located
> in the HDFS directory /user/jt_jzyx_project7, and the following error occurs:
> Permission denied: *user=hive*, access=EXECUTE,
> inode="/user/jt_jzyx_project7":hdfs:jt_jzyx_project7:drwxrwx---:user:g_dcpt_project1:rwx,group::rwx
> Obviously, the Spark Thrift Server did not proxy user jt_jzyx_project7 in HDFS.
> And this happens at the task stage, which means the query passes Hive authorization.
> !screenshot-1.png!






[jira] [Created] (SPARK-27170) Better error message for syntax error with extraneous comma in the SQL parser

2019-03-14 Thread Wataru Yukawa (JIRA)
Wataru Yukawa created SPARK-27170:
-

 Summary: Better error message for syntax error with extraneous 
comma in the SQL parser
 Key: SPARK-27170
 URL: https://issues.apache.org/jira/browse/SPARK-27170
 Project: Spark
  Issue Type: Wish
  Components: SQL
Affects Versions: 2.4.0
Reporter: Wataru Yukawa


[~maropu], [~smilegator]

It was great to talk with you at Hadoop / Spark Conference Japan 2019.
Thanks in advance!
I filed this issue, which we discussed at that time.

We sometimes write SQL with a syntax error caused by an extraneous comma.
For example, here is a query with an extraneous comma at the start of line 2.

{code}
SELECT distinct
,a
,b
,c
FROM ... LIMIT 100
{code}

Spark 2.4.0 reports an error message, but I find it a little hard to understand
because the line number is wrong.
{code}
cannot resolve '`distinct`' given input columns: [...]; line 1 pos 7;
'GlobalLimit 100
+- 'LocalLimit 100
   +- 'Project ['distinct, ...]
      +- Filter (...)
         +- SubqueryAlias ...
            +- HiveTableRelation ...
{code}
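
To make this concrete, here is a minimal spark-shell reproduction sketch (the temp view t and its columns a, b, c are placeholders, not from the original query); judging from the plan above, the parser reads distinct as a column name, which is why the reported position points at line 1:
{code}
// A minimal reproduction sketch (spark-shell, Spark 2.4.x).
// "t" and its columns a, b, c are placeholder names for illustration.
spark.range(1).selectExpr("id AS a", "id AS b", "id AS c").createOrReplaceTempView("t")

// The extraneous comma after DISTINCT: as in the report, the parser appears to
// treat `distinct` as a column name, so the query fails in analysis with
// "cannot resolve '`distinct`' ... line 1 pos 7" instead of a parse error at the comma.
spark.sql("""SELECT distinct
,a
,b
,c
FROM t LIMIT 100""").show()

// The intended query, with the leading comma removed, runs fine:
spark.sql("SELECT DISTINCT a, b, c FROM t LIMIT 100").show()
{code}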

By the way, here is the error message from PrestoSQL 305 for the same SQL.
The line number is correct, and I think the error message is better than Spark SQL's.
{code}
line 2:5: mismatched input ','. Expecting: '*', , 
{code}

It would be great if the Spark SQL error message were improved.

Thanks.
 


