Koji Noguchi created HADOOP-16839:
----------------------------------
Summary: SparkLauncher does not read SPARK_CONF_DIR/spark-defaults.conf
Key: HADOOP-16839
URL: https://issues.apache.org/jira/browse/HADOOP-16839
Project: Hadoop
Assuming you're using TextInputFormat, it sounds like
https://issues.apache.org/jira/browse/MAPREDUCE-773,
fixed in 0.21. Don't know about CDH.
Koji
On 12/27/11 2:00 AM, Niels Basjes ni...@basjes.nl wrote:
I would not expect this. I would expect behaviour that is independent of
the way the splits
Components: scripts
Affects Versions: 0.23.0
Reporter: Koji Noguchi
Assignee: Koji Noguchi
Priority: Trivial
$ hadoop dfs -ls
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
...
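For reference, a minimal sketch of the deprecated invocation next to the replacement the warning recommends (this assumes a configured HDFS client; the listing itself depends on the cluster):

```shell
# Deprecated entry point: still works, but prints the warning above
# before forwarding to the hdfs script.
hadoop dfs -ls

# Preferred form: invoke the hdfs command directly.
hdfs dfs -ls
```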
If we're still
Can we add
* HADOOP-6942 Ability for having user's classes take precedence over the system classes for tasks' classpath
It's pretty bad in the sense that it's not on trunk, but 0.20.204 and CDH3 both
have this *using different parameters*.
Koji
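A hedged sketch of how this knob is typically passed on the 0.20.204-style branch; the property name mapreduce.user.classpath.first is my assumption for that branch (per the note above, CDH3 spelled its parameter differently), and my-job.jar/MyJob are placeholders:

```shell
# Ask the task classpath to put the user's jars ahead of the system ones.
# The property name here is an assumption for the 0.20.204 branch; CDH3
# used a different parameter, per the discussion above.
hadoop jar my-job.jar MyJob \
  -Dmapreduce.user.classpath.first=true \
  input output
```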
On 8/3/11 2:14 PM, Matt Foley
except
HADOOP-6386 and HADOOP-6428.
causes a rolling port side effect in TT
I remember bugging Cos and Rob to revert HADOOP-6386.
https://issues.apache.org/jira/browse/HADOOP-6760?focusedCommentId=12867342&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12867342
Hi Shivani,
You probably don't want to ask m45-specific questions on the hadoop.apache
mailing list.
Try
% hadoop queue -showacls
It should show which queues you're allowed to submit to. If it doesn't list any
queues, you need to request one.
Koji
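A sketch of what that command's output can look like; the queue names and ACL operations below are hypothetical, since they depend entirely on the cluster's scheduler configuration:

```shell
% hadoop queue -showacls
Queue acls for user :  alice

Queue  Operations
=====================
default  submit-job,administer-jobs
research  submit-job
```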
On 2/9/11 9:10 PM, Shivani Rao
If you want to decommission a datanode,
http://hadoop.apache.org/common/docs/r0.20.0/hdfs_user_guide.html#DFSAdmin+Command
briefly explains how the -refreshNodes command works.
Koji
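A minimal sketch of the decommission flow that page describes; the exclude-file path below is an assumption and must match whatever dfs.hosts.exclude points to in the cluster's hdfs-site.xml:

```shell
# 1. Add the datanode's hostname to the exclude file read by the NameNode.
#    (Path is an assumption; use the file named by dfs.hosts.exclude.)
echo "datanode1.example.com" >> /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read its include/exclude files.
hadoop dfsadmin -refreshNodes

# 3. Watch the node move through "Decommission in progress" to
#    "Decommissioned" before shutting it down.
hadoop dfsadmin -report
```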
On 1/31/11 4:35 AM, Segel, Mike mse...@navteq.com wrote:
James,
Remove the node without stopping what?
If you mean
[ https://issues.apache.org/jira/browse/HADOOP-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Koji Noguchi reopened HADOOP-6857:
----------------------------------
I think this number (raw usage) would be helpful. Not sure whether this should
be in -du
Components: io
Reporter: Koji Noguchi
Assignee: Koji Noguchi
Priority: Minor
HADOOP-5879 added a compression level setting for codecs, but DefaultCodec seems to
ignore this conf value when initialized.
This happens only when the codec is first created; reinit() probably
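For context, a hedged sketch of how the HADOOP-5879 level would normally be passed: zlib.compress.level is the key I believe that change introduced, and my-job.jar/MyJob are placeholders. Per this report, DefaultCodec may ignore the value when the codec is first created:

```shell
# Compress job output with DefaultCodec at maximum zlib compression.
# zlib.compress.level is my reading of the HADOOP-5879 key; per this
# report, DefaultCodec may ignore it until reinit() is called.
hadoop jar my-job.jar MyJob \
  -Dmapred.output.compress=true \
  -Dmapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec \
  -Dzlib.compress.level=BEST_COMPRESSION \
  input output
```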
[ https://issues.apache.org/jira/browse/HADOOP-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Koji Noguchi resolved HADOOP-6202.
----------------------------------
Resolution: Duplicate
This is reported in HADOOP-6097.
harchive: Har doesn't work