Oops, I didn't know the mailing list strips attachments.

I'm hosting the files on my Google Drive.

tserver.log
https://drive.google.com/open?id=0B0ffj_ngVZxuaHJQYUtDY3doYm8

master.log
https://drive.google.com/open?id=0B0ffj_ngVZxuMEk4MHJVQzVWZXc


Thank you for the advice,

Takashi

2016-12-05 22:00 GMT+09:00 Josh Elser <els...@apache.org>:
> Apache mailing lists strip attachments. Please host the files somewhere and
> provide a link to them.
>
> On Dec 4, 2016 20:54, "Takashi Sasaki" <tsasaki...@gmail.com> wrote:
>>
>> Hello,
>>
>> I'm sorry, my first post contained some incorrect information.
>>
>> I asked the project members about the problem again.
>> The master server did not throw the AccessControlException.
>>
>> Actually, it was the TabletServer that threw the AccessControlException,
>> and the stack trace in my first post was garbled: it was missing words
>> and showed a wrong path.
>>
>> The correct full stack trace starts at line 52 of the attached file
>> "tserver.log". I have also attached "master.log" for your reference.
>>
>> Unfortunately, I still have not been able to get a debug log.
>>
>> Thank you for your support,
>> Takashi
>>
>>
>> 2016-12-04 18:33 GMT+09:00 Takashi Sasaki <tsasaki...@gmail.com>:
>> > Hello, Christopher
>> >
>> >> The stack trace doesn't include anything from Accumulo, so it's not
>> >> clear where in the Accumulo code this occurred. Do you have the full
>> >> stack trace?
>> > Yes, I understand the stack trace doesn't include anything from Accumulo.
>> > I don't have the full stack trace right now, but I will try to find it.
>> >
>> > In addition, we run Accumulo on an AWS EMR cluster for an enterprise
>> > production system, so the log level isn't set to debug because of disk
>> > capacity constraints.
>> > I will try to reproduce the issue at the debug log level.
>> >
>> > Thank you for your reply,
>> > Takashi
>> >
>> > 2016-12-04 18:00 GMT+09:00 Christopher <ctubb...@apache.org>:
>> >> The stack trace doesn't include anything from Accumulo, so it's not
>> >> clear where in the Accumulo code this occurred. Do you have the full
>> >> stack trace?
>> >>
>> >> In particular, it's not clear to me that there should be a directory
>> >> called failed/da at that location, nor is it clear why Accumulo would
>> >> be trying to check for the execute permission on it, unless it's trying
>> >> to recurse into a directory. There is one part of the code where, if
>> >> the directory exists when log recovery begins, it may try to do a
>> >> recursive delete, but I can't see how this location would have been
>> >> created by Accumulo. If that is the case, then it should be safe to
>> >> manually delete this directory and its contents. The failed marker
>> >> should be a regular file, though, and should not be a directory with
>> >> another directory called "da" in it. So, I can't see how this was even
>> >> created, unless by an older version or another program.
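>> >>
>> >> If you do remove it by hand, something like the following might help
>> >> you confirm what is actually there before deleting. This is only a
>> >> minimal sketch using the Hadoop FileSystem API; the recovery id is the
>> >> one from your error, and the class name is just illustrative:
>> >>
>> >> import org.apache.hadoop.conf.Configuration;
>> >> import org.apache.hadoop.fs.FileStatus;
>> >> import org.apache.hadoop.fs.FileSystem;
>> >> import org.apache.hadoop.fs.Path;
>> >>
>> >> public class InspectRecoveryMarker {
>> >>   public static void main(String[] args) throws Exception {
>> >>     // Picks up the cluster's HDFS settings from the classpath.
>> >>     FileSystem fs = FileSystem.get(new Configuration());
>> >>     // Recovery id taken from the reported error.
>> >>     Path failed = new Path(
>> >>         "/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed");
>> >>     if (fs.exists(failed)) {
>> >>       FileStatus st = fs.getFileStatus(failed);
>> >>       // A normal failed marker is a regular file; a directory here is
>> >>       // the anomaly.
>> >>       System.out.println(failed + " isDirectory=" + st.isDirectory()
>> >>           + " permission=" + st.getPermission());
>> >>       // Recursive delete. Uncomment only after stopping the master and
>> >>       // confirming no log recovery is in progress.
>> >>       // fs.delete(failed, true);
>> >>     }
>> >>   }
>> >> }
>> >>
>> >> The delete is left commented out on purpose: print first, delete second.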
>> >>
>> >> The only way I can see this occurring is if you recently did an
>> >> upgrade, while Accumulo had not yet finished outstanding log recoveries
>> >> from a previous shutdown, AND the previous version did something
>> >> different than 1.7.2. If that was the case, then perhaps the older
>> >> version could have created this problematic directory. It seems
>> >> unlikely, though... because directories are usually not created without
>> >> the execute bit... and the error message looks like a directory missing
>> >> that bit.
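>> >>
>> >> For what it's worth, here is a minimal sketch of how that check gets
>> >> triggered. HDFS resolves a path component by component and requires the
>> >> EXECUTE bit on every intermediate component (that is the
>> >> FSPermissionChecker.checkTraverse frame in your trace), so stat-ing any
>> >> child of failed/da fails when da itself lacks that bit. The child name
>> >> "anything" below is made up for illustration, not from your logs:
>> >>
>> >> import org.apache.hadoop.conf.Configuration;
>> >> import org.apache.hadoop.fs.FileSystem;
>> >> import org.apache.hadoop.fs.Path;
>> >>
>> >> public class TraverseCheckDemo {
>> >>   public static void main(String[] args) throws Exception {
>> >>     FileSystem fs = FileSystem.get(new Configuration());
>> >>     // Resolving this path makes the NameNode traverse ".../failed/da",
>> >>     // which needs EXECUTE on "da"; a plain -rw-r--r-- inode fails it.
>> >>     fs.getFileStatus(new Path(
>> >>         "/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da/anything"));
>> >>     // -> AccessControlException: Permission denied: user=accumulo,
>> >>     //    access=EXECUTE, inode=".../failed/da"
>> >>   }
>> >> }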
>> >>
>> >> It's hard to know more without seeing the full stack trace with the
>> >> relevant Accumulo methods included. It might also help to see the
>> >> master debug logs leading up to the error.
>> >>
>> >> On Sun, Dec 4, 2016 at 2:35 AM Takashi Sasaki <tsasaki...@gmail.com>
>> >> wrote:
>> >>>
>> >>> I use Accumulo 1.7.2 with Hadoop 2.7.2 and ZooKeeper 3.4.8.
>> >>>
>> >>> The master server suddenly threw an AccessControlException.
>> >>>
>> >>> java.io.IOException: org.apache.hadoop.security.AccessControlException: Permission denied: user=accumulo, access=EXECUTE, inode="/accumulo/recovery/603194f3-dd41-44ed-8ad6-90d408149952/failed/da":accumulo:accumulo:-rw-r--r--
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:108)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3855)
>> >>>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1011)
>> >>>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:843)
>> >>>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>> >>>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>> >>>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>> >>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>> >>>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>> >>>  at java.security.AccessController.doPrivileged(Native Method)
>> >>>  at javax.security.auth.Subject.doAs(Subject.java:422)
>> >>>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>> >>>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>> >>>
>> >>>
>> >>> How can I solve this Exception?
>> >>>
>> >>>
>> >>> Thank you,
>> >>> Takashi.
