I use HBase0.96 and CDH4.3.1.
I use Short-Circuit Local Read:
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/home/hadoop/cdh4-dn-socket/dn_socket</value>
</property>
When one disk is bad, because the RegionServer open
Hi,
I am using cdh4.4-mr1 for my scenario with JobTracker HA.
During a failover of the JobTracker, the client is not retrying.
The root cause is the method
org.apache.hadoop.mapred.JobSubmissionProtocol.getSystemDir(), which doesn't throw any
exception, so when client failover happens on this API call,
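For context, the retry behavior the client is missing can be sketched with a plain java.lang.reflect.Proxy that retries a call while it fails with an IOException. This is a simplified, self-contained stand-in for Hadoop's retry-proxy machinery, not the actual client code; the JobProtocol interface, the path, and the retry count below are illustrative:

```java
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class RetryDemo {
    // Illustrative stand-in for the RPC interface; the real
    // JobSubmissionProtocol lives in org.apache.hadoop.mapred.
    interface JobProtocol {
        String getSystemDir() throws IOException;
    }

    // Wrap a target so each call is retried while it throws IOException.
    // Retrying is only possible because getSystemDir() declares the
    // exception; a method that surfaces no failure gives the retry layer
    // nothing to react to, which is the root cause described above.
    static JobProtocol withRetries(JobProtocol target, int maxRetries) {
        InvocationHandler handler = (proxy, method, args) -> {
            IOException last = null;
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                try {
                    return method.invoke(target, args);
                } catch (Exception e) {
                    if (e.getCause() instanceof IOException) {
                        last = (IOException) e.getCause(); // failover window; try again
                    } else {
                        throw e;
                    }
                }
            }
            throw last;
        };
        return (JobProtocol) Proxy.newProxyInstance(
                JobProtocol.class.getClassLoader(),
                new Class<?>[] {JobProtocol.class},
                handler);
    }

    public static void main(String[] args) throws IOException {
        int[] calls = {0};
        // Fail twice (simulating a failover window), then succeed.
        JobProtocol flaky = () -> {
            if (++calls[0] < 3) throw new IOException("failover in progress");
            return "/system/dir";
        };
        System.out.println(withRetries(flaky, 5).getSystemDir()); // prints /system/dir
    }
}
```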
bcc'ed hadoop-user
Lei, perhaps hbase-user can help.
-- Forwarded message --
From: lei liu liulei...@gmail.com
Date: Thu, Feb 13, 2014 at 1:04 AM
Subject: umount bad disk
To: user@hadoop.apache.org
I use HBase0.96 and CDH4.3.1.
I use Short-Circuit Local Read:
<property>
Unsubscribe
See https://hadoop.apache.org/mailing_lists.html#User
On Thu, Feb 13, 2014 at 9:39 AM, Scott Kahler skah...@adknowledge.com wrote:
Unsubscribe
You may want to check this out:
http://stackoverflow.com/questions/323146/how-to-close-a-file-descriptor-from-another-process-in-unix-systems
Hiran Chaudhuri
System Support Programmer / Analyst
Business Development (DB)
Hosting Regional Services (BH)
Amadeus Data Processing GmbH
Berghamer
We have a dataset of ~8 million files, about 0.5 to 2 MB each, and we're
having trouble getting them analysed after building a har file.
The files are already in a pre-existing directory structure: two
nested sets of dirs with 20-100 PDFs at the bottom of each leaf of the dir
tree.
LiuLei,
Using your example, you can run umount -l /disk10 (lower-case L) on the disk;
it will lazily unmount and will not wait for a disconnect.
Thanks,
---
Brandon Freeman
System Engineer
Explorys, Inc.
8501 Carnegie Ave., Suite 200
Cleveland, OH 44106
Hi all,
My team has set up Hadoop 2.2.0 and is having issues loading the Hadoop native
libraries.
We've got a pre-built version of the libs. Unfortunately my glibc version is 2.11.3,
where the libraries require 2.12.
It would be very helpful if, by any chance, you have native libraries built using
Hi all,
We collected and analyzed JIRA tickets to investigate
the activities of Apache Hadoop Community in 2013.
http://ajisakaa.blogspot.com/2014/02/the-activities-of-apache-hadoop.html
We counted the number of organizations, the lines
of code, and the number of issues. As a result, we
I'm on the YARN 2.2.0 release. I configured 2 single-node clusters on my
laptop (just for a POC; all port conflicts are resolved, I can see the NM
and RM are up, and the web UI shows everything is fine), and I also have a standalone
Java application. The Java application is a kind of job client; it will
submit
Try doing umount -l
Sent from my iPhone
On Feb 13, 2014, at 11:10 AM, Arpit Agarwal aagar...@hortonworks.com
wrote:
bcc'ed hadoop-user
Lei, perhaps hbase-user can help.
-- Forwarded message --
From: lei liu liulei...@gmail.com
Date: Thu, Feb 13, 2014 at 1:04 AM
Hi Anfernee,
It sounds most likely that the config is somehow corrupted. You have two sets of
config to start the two YARN clusters separately, don't you? If you provide more
detail about how you configured the two clusters, it will be easier for the community
to understand your problem.
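For what it's worth, when two single-node YARN clusters share one machine, the second cluster's yarn-site.xml needs every RM/NM endpoint moved off the defaults the first cluster keeps. A sketch (the port numbers here are illustrative, not recommendations):

```xml
<!-- yarn-site.xml for the second single-node cluster; ports are examples -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>localhost:18032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>localhost:18030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>localhost:18031</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>localhost:18088</value>
</property>
<property>
  <name>yarn.nodemanager.localizer.address</name>
  <value>localhost:18040</value>
</property>
```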
- Zhijie
On Thu, Feb 13,
Hi Zhijie,
I agree. What I'm doing in the standalone app is: the app loads the
first cluster's configuration (mapred-site.xml, yarn-site.xml) as its default
configuration, then submits the MR job with this configuration to the first
cluster; after the job is finished, I will submit the second
Hi, mailing list:
I use Scribe to receive data from an app and write it to Hadoop HDFS. When
the system is under highly concurrent connections,
it causes an HDFS error like the following: the incoming connections are
blocked, and Tomcat dies.
In dir user/hive/warehouse/dsp.db/request, the file
I have a Linux container that dies. The NodeManager logs only say:
WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor:
Exception from container-launch :
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
at
Hi Experts,
I have been working on the JIRA
https://issues.apache.org/jira/browse/MAPREDUCE-4490 and attached
MAPREDUCE-4490.patch, which could fix this JIRA. I would like to contribute
my patch to the community, but have encountered some issues.
MAPREDUCE-4490 is an issue on Hadoop-1.x versions, and my
Hi,
I am planning to install Hadoop and HBase in a 2-node cluster.
I have chosen 0.94.16 for HBase (the current stable version).
I am confused about which Hadoop version to choose.
I see on Hadoop's download page that there are 2 stable series: one is the 1.x and
the other is the 2.x series.
Which one
Hi,
I have HBase and Hadoop set up in pseudo-distributed mode in production.
Now I am planning to move from pseudo-distributed mode to fully distributed
mode (a 2-node cluster).
My existing HBase and Hadoop versions are 0.94.7 and 1.1.2, respectively.
And I am planning to have fully distributed mode with HBase
Both Hadoop 1 and Hadoop 2 work. If you start from scratch you should probably
start with Hadoop 2.
Note that if you want to use Hadoop 2.2.x you need to change the protobuf
dependency in HBase's pom.xml to 2.5.
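The pom change amounts to bumping the protobuf artifact version; roughly (coordinates as commonly used for protobuf in Maven, version aligned with Hadoop 2.2.x):

```xml
<!-- in HBase's pom.xml: align protobuf with Hadoop 2.2.x -->
<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.5.0</version>
</dependency>
```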
(there's a certain irony here that the protocol/library we use to get version