I added the following lines to my /etc/security/limits.conf, then restarted:
* hard nofile 1048576
* soft nofile 1048576
That seems to have fixed the problem.
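For anyone who hits this later: /etc/security/limits.conf is read by PAM at session start, so the new values only take effect in a fresh login. A quick sanity check from a new shell (the 1048576 values here are just the ones from the snippet above):

```shell
# Check the limits the current shell actually inherited.
# After re-login, both should report the value configured in
# /etc/security/limits.conf (1048576 in the snippet above).
ulimit -Sn   # soft open-file limit
ulimit -Hn   # hard open-file limit
```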
On Fri, Aug 5, 2016 at 1:10 PM, Jim Apple wrote:
> I
I restarted, ran bin/testdata/run-all.sh, but running list in hbase
shell still says:
ERROR: org.apache.hadoop.hbase.ipc.ServerNotRunningYetException:
Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2296)
at
The NN and DNs have 600-800 files open each, and my ulimit is 1024 per
process. On the machine as a whole, lsof | wc -l is 1047067.
proc_nodemanager and proc_regionserver have a ton of open files: tens
of thousands each. For instance, nodemanager has 1200 fds pointing to
one of three different
One idea is to check your ulimit for file descriptors and run `lsof |
wc -l` to see if you have for some reason exceeded the limit. Otherwise, a
fresh reboot might help you figure out whether a stray process is
hogging FDs somewhere.
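To narrow down which process is hogging FDs, a per-process count from /proc is usually quicker than grepping lsof output. A rough sketch (Linux only; the "java" pattern is an assumption, adjust it to match your daemons):

```shell
# Count open fds per Java process via /proc/<pid>/fd (Linux only).
# Matching on "java" is an assumption; adjust the pattern to catch
# the NN/DN/nodemanager/regionserver processes on your machine.
for pid in $(pgrep -f java); do
  count=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
  echo "$count $pid"
done | sort -rn | head
```

The sorted output puts the worst offender first, which makes it easy to see whether one daemon is burning through descriptors.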
On Sun, Jul 24, 2016 at 8:09 PM Jim Apple wrote:
The NN and DN logs are empty.
I ran bin/kill-all.sh at the beginning of this, so I assume that
nothing is holding them except my little Impala work.
On Sun, Jul 24, 2016 at 8:03 PM, Bharath Vissapragada
wrote:
> Based on
>
> 16/07/24 18:36:08 WARN hdfs.BlockReaderFactory:
Based on
16/07/24 18:36:08 WARN hdfs.BlockReaderFactory: I/O error constructing
remote block reader.
java.net.SocketException: Too many open files
16/07/24 18:36:08 WARN hdfs.DFSClient: Failed to connect to
/127.0.0.1:31000 for block, add to deadNodes and continue.
java.net.SocketException: Too
Several thousand lines of things like
WARN shortcircuit.ShortCircuitCache: ShortCircuitCache(0x419c7df4):
failed to load 1073764575_BP-1490185442-127.0.0.1-1456935654337
java.lang.NullPointerException at
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitReplica.<init>(ShortCircuitReplica.java:126)
...
Do you see anything in the HMaster log? From the error it looks like the
HBase master hasn't started properly for some reason.
On Mon, Jul 25, 2016 at 6:08 AM, Jim Apple wrote:
> I tried reloading the data with
>
> ./bin/load-data.py --workloads functional-query
>
> but
I tried reloading the data with
./bin/load-data.py --workloads functional-query
but that gave errors like
Executing HBase Command: hbase shell
load-functional-query-core-hbase-generated.create
16/07/24 17:19:39 INFO Configuration.deprecation: hadoop.native.lib is
deprecated. Instead, use
I'm having trouble with my HBase environment, and it's preventing me
from running bin/run-all-tests.sh. I am on Ubuntu 14.04. I have tried
this with a clean build, and I have tried unset LD_LIBRARY_PATH &&
bin/impala-config.sh, and I have tried ./testdata/bin/run-all.sh
Here is the error I get: