Sorry, I should have included that. It does seem that the tserver is running
as hduser as well; see below:

*$ hadoop version*
Hadoop 2.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
This command was run using
/usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar
*$ jps -v*
*7930* Main -Dapp=tserver -XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true
-Xmx2g -Xms2g -XX:NewSize=1G -XX:MaxNewSize=1G
-XX:OnOutOfMemoryError=kill -9 %p
-Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl
-Djava.library.path=/usr/local/hadoop/lib/native
-Dorg.apache.accumulo.core.home.dir=/usr/local/accumulo
-Dhadoop.home.dir=/usr/local/hadoop
-Dzookeeper.home.dir=/usr/share/zookeeper
*$ ps -al | grep 7930*
0 S  1001  7930     1  8  80   0 - 645842 futex_ pts/2   00:00:36 java
*hduser@accumulo:/home$ id -u hduser*
1001
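
A more direct check, in case it helps (assuming procps-style ps; 7930 is the
tserver pid from the jps output above):

*$ ps -o user= -p 7930*

That prints the owning user by name rather than by UID, so it should come
back as hduser here.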

Also, pardon the mix-up above between the HDFS paths /accumulo and
*/accumulo0*: I was trying a fresh HDFS folder while taking debugging notes
for my first email. The same problem occurred either way.
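
For anyone reproducing this: the directory the instance uses comes from
accumulo-site.xml (a sketch assuming the default 1.5.x conf layout, where
instance.dfs.dir defaults to /accumulo if unset):

*$ grep -A1 instance.dfs $ACCUMULO_HOME/conf/accumulo-site.xml*
*$ hadoop fs -ls /accumulo/instance_id*

If the grep matches nothing, the /accumulo default is in effect; the second
command lists a file named with the instance UUID.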

Thanks for any help,

Mike

On Mon, Jan 5, 2015 at 7:08 PM, John Vines <[email protected]> wrote:

> And can you validate the user the tserver process is running as?
>
> On Mon, Jan 5, 2015 at 7:07 PM, John Vines <[email protected]> wrote:
>
>> What version of hadoop?
>>
>> On Mon, Jan 5, 2015 at 6:50 PM, Mike Atlas <[email protected]> wrote:
>>
>>> Hello,
>>>
>>> I'm running Accumulo 1.5.2, trying to test out the GeoMesa
>>> <http://www.geomesa.org/2014/05/28/geomesa-quickstart/> family of
>>> spatio-temporal iterators using their quickstart demonstration tool. I
>>> suspect my Accumulo setup is what's blocking my progress, though, so can
>>> someone confirm that everything below looks right?
>>>
>>> start-all.sh output:
>>>
>>> hduser@accumulo:~$ $ACCUMULO_HOME/bin/start-all.sh
>>> Starting monitor on localhost
>>> Starting tablet servers .... done
>>> Starting tablet server on localhost
>>> 2015-01-05 21:37:18,523 [server.Accumulo] INFO : Attempting to talk to 
>>> zookeeper
>>> 2015-01-05 21:37:18,772 [server.Accumulo] INFO : Zookeeper connected and 
>>> initialized, attemping to talk to HDFS
>>> 2015-01-05 21:37:19,028 [server.Accumulo] INFO : Connected to HDFS
>>> Starting master on localhost
>>> Starting garbage collector on localhost
>>> Starting tracer on localhost
>>>
>>> hduser@accumulo:~$
>>>
>>>
>>> I do believe my HDFS is set up correctly:
>>>
>>> hduser@accumulo:/home/ubuntu/geomesa-quickstart$ hadoop fs -ls /accumulo
>>> Found 5 items
>>> drwxrwxrwx   - hduser supergroup          0 2014-12-10 01:04 
>>> /accumulo/instance_id
>>> drwxrwxrwx   - hduser supergroup          0 2015-01-05 21:22 
>>> /accumulo/recovery
>>> drwxrwxrwx   - hduser supergroup          0 2015-01-05 20:14 
>>> /accumulo/tables
>>> drwxrwxrwx   - hduser supergroup          0 2014-12-10 01:04 
>>> /accumulo/version
>>> drwxrwxrwx   - hduser supergroup          0 2014-12-10 01:05 /accumulo/wal
>>>
>>>
>>> However, when I check the Accumulo monitor logs, I see these errors
>>> post-startup:
>>>
>>> java.io.IOException: Mkdirs failed to create directory 
>>> /accumulo/recovery/15664488-bd10-4d8d-9584-f88d8595a07c/part-r-00000
>>>     java.io.IOException: Mkdirs failed to create directory 
>>> /accumulo/recovery/15664488-bd10-4d8d-9584-f88d8595a07c/part-r-00000
>>>             at org.apache.hadoop.io.MapFile$Writer.<init>(MapFile.java:264)
>>>             at org.apache.hadoop.io.MapFile$Writer.<init>(MapFile.java:103)
>>>             at 
>>> org.apache.accumulo.server.tabletserver.log.LogSorter$LogProcessor.writeBuffer(LogSorter.java:196)
>>>             at 
>>> org.apache.accumulo.server.tabletserver.log.LogSorter$LogProcessor.sort(LogSorter.java:166)
>>>             at 
>>> org.apache.accumulo.server.tabletserver.log.LogSorter$LogProcessor.process(LogSorter.java:89)
>>>             at 
>>> org.apache.accumulo.server.zookeeper.DistributedWorkQueue$1.run(DistributedWorkQueue.java:101)
>>>             at 
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>             at 
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>             at 
>>> org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>             at 
>>> org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>             at java.lang.Thread.run(Thread.java:745)
>>>
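>>> A side note on the trace, based on my reading of the Hadoop 2.2 source, so
>>> treat it as a guess: MapFile.Writer throws this exact IOException whenever
>>> FileSystem.mkdirs() returns false. That can be a permissions problem, but
>>> it can also happen if the path is resolved against the local filesystem
>>> because fs.defaultFS isn't visible to the process. Two hedged checks,
>>> assuming a stock Hadoop 2.2 layout:
>>>
>>> hduser@accumulo:~$ hdfs getconf -confKey fs.defaultFS
>>> hduser@accumulo:~$ hadoop classpath | tr ':' '\n' | grep etc/hadoop
>>>
>>> The first should print an hdfs:// URI, and the second should show the conf
>>> directory on the classpath.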
>>>
>>> I don't really understand this: I started Accumulo as hduser, the same
>>> user that owns the HDFS directory /accumulo/recovery, and it looks like
>>> the recovery directory itself actually was created; only the final
>>> part-r-00000 directory is missing:
>>>
>>> hduser@accumulo:~$ hadoop fs -ls /accumulo0/recovery/
>>> Found 1 items
>>> drwxr-xr-x   - hduser supergroup          0 2015-01-05 22:11 
>>> /accumulo/recovery/87fb7aac-0274-4aea-8014-9d53dbbdfbbc
>>>
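>>> Since the parent exists in HDFS with wide-open permissions, one theory is
>>> that the failing mkdir isn't happening in HDFS at all. These commands would
>>> test that (the file:/// prefix forces the local filesystem; the path is
>>> taken from the error above):
>>>
>>> hduser@accumulo:~$ hadoop fs -ls file:///accumulo/recovery
>>> hduser@accumulo:~$ ls -ld /
>>>
>>> If /accumulo doesn't exist locally and / isn't writable by hduser, a
>>> local-filesystem mkdir would fail exactly the way the trace shows.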
>>>
>>> I'm not out of physical disk space:
>>>
>>> hduser@accumulo:~$ df -h
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/xvda1     1008G  8.5G  959G   1% /
>>>
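>>> Free space isn't the only way a local mkdir can fail, though; inode
>>> exhaustion gives the same symptom with space to spare, so for completeness:
>>>
>>> hduser@accumulo:~$ df -i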
>>>
>>> What could be going on here? Any ideas on something simple I could have
>>> missed?
>>>
>>> Thanks,
>>> Mike
>>>
>>
>>
>
