Hello Francois,

I tried this on the latest master (1.7.0) on a MapR cluster, and I was able
to query an individual file with just read permissions:

> select c_first_name from dfs.tmp.`customer.parquet` limit 1;
+---------------+
| c_first_name  |
+---------------+
| Javier        |
+---------------+
1 row selected (1.1 seconds)


# hadoop fs -ls /tmp/customer.parquet
-r--r--r--   3 root root    7778841 2016-04-27 13:50 /tmp/customer.parquet

Note: I did not use impersonation or have security enabled on my cluster.
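
For reference, HDFS uses the same POSIX-style rwx permission bits as a local
filesystem, so the workaround discussed in this thread can be sketched locally
(a hypothetical illustration; the `mktemp` file stands in for the Parquet file):

```shell
# A mode of 664 (-rw-rw-r--) matches file1.parquet in the error below:
# readable by everyone, but missing the execute bit Drill 1.6 complains about.
f=$(mktemp)
chmod 664 "$f"
stat -c '%a' "$f"    # 664

# The workaround from the thread: add the execute bit for all users.
chmod a+x "$f"
stat -c '%a' "$f"    # 775
rm -f "$f"
```

On HDFS the equivalent is `hdfs dfs -chmod a+x <path>`; note this masks the
symptom rather than explaining why Drill started requiring execute on files.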

-Abhishek

On Wed, Apr 27, 2016 at 12:24 PM, François Méthot <[email protected]>
wrote:

> Has anyone experienced the same issue? We are using HDFS managed by
> Cloudera Manager.
>
> A simple upgrade to 1.6 caused queries done directly on individual files to
> fail with "Permission denied: user=drill, access=EXECUTE" error.
> Filtering on the "dir0" directory structure also triggers the issue,
> for example:
>    ex: select col1 from hdfs.`/datasrc/` where dir0>= 1234567;
>
> We ended up giving "execute" access to all the data files.
>
> We would really like to know whether Drill is intended to require
> execute permission on data files.
>
> Thanks
>
> On Tue, Apr 26, 2016 at 11:23 AM, François Méthot <[email protected]>
> wrote:
>
> > Hi,
> >
> >   We just switched to version 1.6, using Java 1.7_60.
> >
> > We noticed that we can no longer query individual files stored in HDFS
> > from CLI and WebUI.
> >
> > select col1 from hdfs.`/data/file1.parquet`;
> >
> > Error: SYSTEM ERROR: RemoteException: Permission denied: user=drill,
> > access=EXECUTE inode="/data/file1.parquet":mygroup:drill:-rw-rw-r--
> >
> > If we give execute permission to the file:
> >
> > hdfs dfs -chmod +x /data/file1.parquet
> >
> > Then the query works.
> >
> > If we query the parent folder (hdfs.`/data/`), the query works as well.
> >
> > Is this the expected behavior in 1.6?
> >
> > Francois
> >
> >
>
