Thanks, Abhishek!
Will check and update you on it.

On Mon, 2 Jul 2018 at 23:57, Abhishek Girish <[email protected]> wrote:

> Hey Divya,
>
> Here is one way to check if all nodes have the same UID/GID:
>
> clush -a 'cat /etc/passwd | grep -i user1'
> Node1: user1:x:5000:5000::/home/user1:/bin/bash
> Node2: user1:x:5000:5000::/home/user1:/bin/bash
> Node3: user1:x:6000:6000::/home/user1:/bin/bash
>
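> For a quick cross-check (just a sketch; user1 is the same example account as
> above, and id is plain coreutils), you can also compare the numeric IDs
> directly. Every node should report an identical uid/gid pair:
>
> clush -a 'id user1'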
>
> You can update the UID and GID using the usermod and groupmod commands. Make
> sure to restart your DFS and Drill services after that.
>
> For example, on Node3,
>
> usermod -u 5000 user1
> groupmod -g 5000 user1
>
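> One caveat, in case it applies (the search root below is just a placeholder,
> so adjust it to wherever user1's files actually live): usermod -u normally
> re-owns files under the user's home directory, but anything elsewhere that is
> still owned by the old numeric IDs has to be fixed by hand, e.g.:
>
> # Re-own local files still carrying the old UID/GID (6000 in the example above).
> find /data -xdev -uid 6000 -exec chown -h user1 {} +
> find /data -xdev -gid 6000 -exec chgrp -h user1 {} +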
>
>
> Regards,
> Abhishek
>
> On Mon, Jul 2, 2018 at 2:58 AM Divya Gehlot <[email protected]>
> wrote:
>
> > Hi Abhishek,
> > Thanks for the prompt response!
> > Yes, I have a Big Data cluster with Apache Drill as part of it; security is
> > plain authentication, connected through AD.
> > I recently added 3 more nodes to the cluster.
> > How do I ensure that all the nodes have the same UID + GID, as you
> > mentioned in your email?
> >
> > Thanks,
> > Divya
> >
> >
> >
> > On Mon, 2 Jul 2018 at 11:37, Abhishek Girish <[email protected]> wrote:
> >
> > > Hey Divya,
> > >
> > > I have a suspicion: there is a chance you have a distributed Drill
> > > environment and not all of the nodes have the same user (with the same
> > > UID + GID). Since your dataset isn't large, as you mentioned, not all
> > > Drillbits are always involved in the query execution. So you might
> > > intermittently see such failures when one of the Drillbits working on
> > > this query doesn't have the right user and hence lacks the required
> > > access to the path on the DFS. Can you please check and let us know?
> > >
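> > > One quick way to see what the DFS itself has recorded (just a sketch,
> > > assuming the hadoop CLI is available on a Drillbit node; the path below
> > > is the placeholder from your error message) is to list the target
> > > directory and compare its owner/group with what each node resolves that
> > > account to:
> > >
> > > hadoop fs -ls /path/to/directory/peoplecount/2018_06_29
> > >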
> > > Regards,
> > > Abhishek
> > >
> > > On Sun, Jul 1, 2018 at 7:16 PM Divya Gehlot <[email protected]>
> > > wrote:
> > >
> > > > Hi,
> > > > When I checked the error in the Profile section of the query that ran:
> > > >
> > > > Apache Drill
> > > >
> > > > "error": "SYSTEM ERROR: Drill Remote Exception\n\n",
> > > >     "verboseError": "SYSTEM ERROR: Drill Remote Exception\n\n\n\n",
> > > >
> > > > When I turned verbose on, I could see the error below when I ran the
> > > > query:
> > > > org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: AccessControlException: User <userid>(user id 829131620) does not have access to /path/to/directory/peoplecount/2018_06_29/17/0_0_0.parquet
> > > > Fragment 0:0 [Error Id: 148a32c7-3af4-4929-982f-3c06ef505eed on <DNSAddress>:31010]
> > > > (org.apache.hadoop.security.AccessControlException) User <userid>(user id 829131620) does not have access to /path/to/directory/peoplecount/2018_06_29/17/0_0_0.parquet
> > > > com.mapr.fs.MapRClientImpl.create():233
> > > > com.mapr.fs.MapRFileSystem.create():806
> > > > com.mapr.fs.MapRFileSystem.create():899
> > > > org.apache.hadoop.fs.FileSystem.createNewFile():1192
> > > > org.apache.drill.exec.store.StorageStrategy.createFileAndApply():122
> > > > org.apache.drill.exec.store.parquet.ParquetRecordWriter.endRecord():374
> > > > org.apache.drill.exec.store.EventBasedRecordWriter.write():68
> > > > org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext():106
> > > > org.apache.drill.exec.record.AbstractRecordBatch.next():162
> > > > org.apache.drill.exec.record.AbstractRecordBatch.next():119
> > > > org.apache.drill.exec.record.AbstractRecordBatch.next():109
> > > > org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
> > > > org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():134
> > > > org.apache.drill.exec.record.AbstractRecordBatch.next():162
> > > > org.apache.drill.exec.physical.impl.BaseRootExec.next():104
> > > > org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
> > > > org.apache.drill.exec.physical.impl.BaseRootExec.next():94
> > > > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():232
> > > > org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():226
> > > > java.security.AccessController.doPrivileged():-2
> > > > javax.security.auth.Subject.doAs():422
> > > > org.apache.hadoop.security.UserGroupInformation.doAs():1633
> > > > org.apache.drill.exec.work.fragment.FragmentExecutor.run():226
> > > > org.apache.drill.common.SelfCleaningRunnable.run():38
> > > > java.util.concurrent.ThreadPoolExecutor.runWorker():1149
> > > > java.util.concurrent.ThreadPoolExecutor$Worker.run():624
> > > > java.lang.Thread.run():748
> > > >
> > > > Thanks,
> > > > Divya
> > > >
> > > > On Fri, 29 Jun 2018 at 18:41, Divya Gehlot <[email protected]>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > > At times I get an error while running CTAS; it doesn't happen every
> > > > > time, and on subsequent runs over the next 18 hours it will create the
> > > > > table fine.
> > > > > Here are the details:
> > > > > ls -ltr /path/to/directory/parquetfiles/2018_06_29
> > > > > total 9
> > > > > drwxrwxr-x 2 <userid> <userid> 1 Jun 28 12:05 00
> > > > > drwxrwxr-x 2 <userid> <userid> 1 Jun 28 13:05 01
> > > > >
> > > > > Error Logs :
> > > > > SYSTEM ERROR: AccessControlException: User <userid>(user id 829131620) does not have access to /path/to/directory/parquetfiles/2018_06_29/17/0_0_0.parquet
> > > > >
> > > > > Appreciate the help!
> > > > >
> > > > > Thanks,
> > > > > Divya
> > > > >
> > > >
> > >
> >
>
