If it's the job tracker you use, it's MR1.
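A quick way to check, if you have shell access to a cluster node, is to list
the running Java daemons; MR1 runs a JobTracker/TaskTracker, while MR2 (YARN)
runs a ResourceManager/NodeManager instead:
  $ jps | grep -E 'JobTracker|TaskTracker|ResourceManager|NodeManager'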
On Feb 1, 2014 12:23 AM, "Keith Wiley" wrote:
> Hmmm, okay. I know it's running CDH4 4.4.0, but as for whether it was
> specifically configured with MR1 or MR2 (is there a distinction between MR2
> and Yarn?) I'm not absolutely certain. I know that the clu
HDFS-5804 (https://issues.apache.org/jira/browse/HDFS-5804) allows the
NFS gateway to communicate with a secured Hadoop cluster using
Kerberos. HDFS-5086 will add RPCSEC_GSS authentication support to the NFS
gateway.
Thanks,
-Jing
On Fri, Jan 31, 2014 at 10:06 AM, Gerlach, Ryan K
wrote:
> Hi, anyone
Hmmm, okay. I thought that logic all acted at the level of "slots". I didn't
realize it could make "node" distinctions. Thanks for the tip.
On Jan 29, 2014, at 05:18 , java8964 wrote:
> Or you can implement your own InputSplit and InputFormat, which you can
> control how to send tasks to whi
Hmmm, okay. I know it's running CDH4 4.4.0, but as for whether it was
specifically configured with MR1 or MR2 (is there a distinction between MR2 and
Yarn?) I'm not absolutely certain. I know that the cluster "behaves" like the
MR1 clusters I've worked with for years (I interact with the job t
Hi,
Here's my situation: currently I have a Hadoop v1 cluster and I made some
changes to CapacityTaskScheduler to meet my requirements, as below.
My MR job is fetching data from HDFS (the data is generated by someone else)
and putting the data on a certain node in the cluster, and I need the data
distribution
Hi, anyone know if the NFS HDFS gateway is currently supported on secure
clusters using Kerberos for Hadoop 2.2.0? We are using HDP 2.0 and looking to
use the NFS gateway.
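For what it's worth, once a gateway is up, mounting it from a client looks
roughly like this (the host name is just an example, and this doesn't address
the Kerberos part):
  $ sudo mount -t nfs -o vers=3,proto=tcp,nolock nfs-gw-host:/ /mnt/hdfs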
Thanks for any guidance!
Ryan
I am using default values for both. My version is 1.1.2, and the default
value for "dfs.block.size" (67108864) is evenly divisible by 512.
However, the online reference of default values for my version (
http://hadoop.apache.org/docs/r1.1.2/hdfs-default.html) doesn't list any
checksum-related settings
Hi Tom,
My hint is that your block size should be a multiple of the CRC chunk size.
Check your dfs.block.size property: convert it into bytes, then divide it by
the checksum size that is set; usually the dfs.bytes-per-checksum property
tells you this value, or you can get the checksum value from the error message yo
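For example, with the defaults mentioned above the division comes out even,
which you can confirm for any pair of values from the shell:
  $ echo $((67108864 % 512))   # dfs.block.size % dfs.bytes-per-checksum; prints 0, so the pair is valid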
Hi all,
After changing a configuration value in yarn-site.xml, can I make the
modification take effect without restarting all daemons? More specifically,
after I modify the "yarn.nodemanager.resource.memory-mb" value, I want the
change to take effect without restarting the node manager.
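For comparison, restarting only the NodeManager on the affected node, rather
than all daemons, would look roughly like this (assuming the stock sbin
scripts and that HADOOP_HOME points at the install):
  $ $HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
  $ $HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager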
In addition, c
What is the right way to use the "-crc" option with hadoop dfs -copyToLocal?
Is this the wrong list?
--Tom
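For reference, the form I mean is something like the following (paths are just
examples); as I understand it, -crc also copies the matching .crc checksum
files alongside the data:
  $ hadoop fs -copyToLocal -crc /archive/data/part-00000 /mnt/shared/archive/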
On Tue, Jan 28, 2014 at 11:53 AM, Tom Brown wrote:
> I am archiving a large amount of data out of my HDFS file system to a
> separate shared storage solution (There is not much HDFS spac
Hi!
We are using the following version of Hadoop software: 2.0.0-cdh4.1.1.
An HDFS HA configuration is applied.
I would like to ask if there are any differences or possible side effects of
backing up the namenode data directory.
Is it ok to just copy the NN data directory from the active NN to another host,
what a
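To be concrete, what I have in mind is something like the following, where
/data/dfs/name stands for whatever dfs.namenode.name.dir points to and
backup-host is just an example:
  $ rsync -a /data/dfs/name/ backup-host:/backup/nn/name/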
Hi there,
I am trying to solve a problem. My client runs as a server. I was trying to
make my client aware of the fact that the resource manager is down, but I could
not figure out how. The reason is that the call yarnClient.createApplication()
never returns when the resource manager is down. Howe
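One rough way to probe the RM from outside the blocking call is to hit its web
service first (default port 8088; the host name is an example):
  $ curl -sf --max-time 5 http://rm-host:8088/ws/v1/cluster/info > /dev/null \
      && echo "RM up" || echo "RM down"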
Correcting a typo:
*dfs.namenode.http-address*
*Thanks*
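Once it's in place you can confirm the value the cluster actually picks up
with something like:
  $ hdfs getconf -confKey dfs.namenode.http-address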
On Fri, Jan 31, 2014 at 7:25 PM, Jitendra Yadav
wrote:
> Can you please change the below property and restart your cluster again?
>
> FROM:
>
> dfs.http.address
>
>
> TO:
> dfs.namenode.http-addres
>
> Thanks
> Jitendra
>
>
> On Fri,
Can you please change the below property and restart your cluster again?
FROM:
dfs.http.address
TO:
dfs.namenode.http-addres
Thanks
Jitendra
On Fri, Jan 31, 2014 at 7:07 PM, Stuti Awasthi wrote:
> Hi Jitendra,
>
>
>
> I realized that some days back, my cluster was down due to a power failur
Hi,
Please post the output of the dfsadmin -report command; this could help us
understand cluster health.
# *hadoop dfsadmin -report*
Thanks
Jitendra
On Fri, Jan 31, 2014 at 6:44 PM, Stuti Awasthi wrote:
> Hi All,
>
>
>
> I have suddenly started facing an issue on the Hadoop cluster. Seems like HTTP
> requ
Hi All,
I have suddenly started facing an issue on the Hadoop cluster. It seems like HTTP
requests at port 50070 on dfs are not working properly.
I have a Hadoop cluster which has been operating for several days. Recently we are
also not able to see the dfshealth.jsp page from the web console.
Problems:
1.
http://:5007
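For reference, a couple of quick checks from the namenode host (the host name
below is a placeholder) to see whether anything is listening on 50070 at all:
  $ netstat -tlnp | grep 50070
  $ curl -I http://namenode-host:50070/dfshealth.jsp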
It shouldn't cause any issue; just make sure you have all the dependent files
available on HDFS.
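For example, a quick existence check on each dependency (the path is just an
example):
  $ hadoop fs -test -e /user/jyoti/giraph/myjob.jar && echo present || echo missing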
Thanks
Jitendra
On Fri, Jan 31, 2014 at 4:52 PM, Jyoti Yadav wrote:
> Thanks Jitendra..
> If I format, will it create any side effects for Giraph job execution? I
> think it won't create any problem but I
Hi,
I have set up HBase in pseudo-distributed mode.
I keep getting the below exception in the datanode log.
Is it a problem?
(Hadoop version 1.1.2, HBase version 0.94.7)
Please help.
java.net.SocketTimeoutException: 48 millis timeout while waiting for
channel to be ready for write. ch
Thanks Jitendra..
If I format, will it create any side effects for Giraph job execution? I
think it won't create any problem, but I am getting a little bit scared.
Thanks
On Fri, Jan 31, 2014 at 4:22 PM, Jitendra Yadav
wrote:
> Hi Jyoti,
>
> That's right, you will lose all the HDFS data, therefore yo
Hi Jyoti,
That's right, you will lose all the HDFS data, therefore you need to take a
backup of your critical data from HDFS to some other place. If you are
using logical volumes then it would be better to add more space on the
particular volume/mount point.
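Rough sketches of both options (paths, sizes and device names are just
examples):
  # copy critical data out of HDFS first
  $ hadoop fs -copyToLocal /user/jyoti/important /backup/
  # or, with LVM, grow the volume holding the Hadoop directories (ext filesystem assumed)
  $ sudo lvextend -L +20G /dev/vg0/hadoop && sudo resize2fs /dev/vg0/hadoop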
Regards
Jitendra
On Fri, Jan 31, 2014 at 4:10
Hi folks..
I have some doubts while thinking of formatting the namenode..
Actually I am doing my project in Apache Giraph, which makes use of the Hadoop
environment to run the job..
While running various jobs, I noticed that the
/app/hadoop/tmp/dfs/name/data
directory is almost full..
Should I format my