[jira] [Created] (HADOOP-7597) dfs -chmod does not work as expected

2011-08-31 Thread XieXianshan (JIRA)
dfs -chmod does not work as expected


 Key: HADOOP-7597
 URL: https://issues.apache.org/jira/browse/HADOOP-7597
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.24.0
Reporter: XieXianshan
Assignee: XieXianshan
Priority: Trivial
 Fix For: 0.24.0


The = operator of chmod should take top priority, i.e., the remaining operators
should be ignored if the first operator is =.
For example:
# hdfs dfs -ls /user/
dr--r--r--   - root supergroup  0 2011-08-31 19:42 /user/hadoop
# hdfs dfs -chmod =+w /user/hadoop
# hdfs dfs -ls /user/
d-w--w--w-   - root supergroup  0 2011-08-31 19:42 /user/hadoop
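
For comparison, a sketch of the output this report appears to expect, assuming that = with no permission letters clears the mode and the trailing +w is ignored (this expected listing is an inference from the description above, not verified output):

# hdfs dfs -chmod =+w /user/hadoop
# hdfs dfs -ls /user/
d---------   - root supergroup  0 2011-08-31 19:42 /user/hadoop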



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: ERROR building latest trunk for Hadoop project

2011-08-31 Thread Robert Evans
I ran into the same error with mvn compile.  There are some issues with
dependency resolution in mvn, and you need to run

mvn test -DskipTests

to compile the code.
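
For reference, a minimal sketch of the full sequence on a fresh trunk checkout (the checkout directory name below is hypothetical; only the mvn command itself comes from this thread):

    # from the top of the svn checkout (directory name is hypothetical)
    cd hadoop-trunk
    # run the build through the test phase but skip executing the tests;
    # this lets Maven resolve the inter-module dependencies that break a
    # plain 'mvn compile'
    mvn test -DskipTests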

--Bobby


On 8/30/11 7:21 AM, Praveen Sripati praveensrip...@gmail.com wrote:

Rerun the build with the below options and see if you can get more
information to solve this.

 [ERROR] To see the full stack trace of the errors, re-run Maven with the
-e switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.

Thanks,
Praveen

On Tue, Aug 30, 2011 at 12:16 PM, A BlueCoder bluecoder...@gmail.com wrote:

 Hi, I have checked out the HEAD version of Hadoop trunk from svn, and run
 into the following errors when I do 'mvn compile'.

 Can someone shed some light on what is going wrong?

 Thanks a lot,

 B.C. 008

 NB, I have separately built and installed the ProtocolBuffer package
 (protobuf-2.4.1) successfully.

 
 screenshot
 --
 [INFO]
 [INFO] --- maven-antrun-plugin:1.6:run (create-protobuf-generated-sources-directory) @ hadoop-yarn-common ---
 [INFO] Executing tasks

 main:
    [mkdir] Created dir: C:\temp\hadoop_trunk1\hadoop-mapreduce-project\hadoop-yarn\hadoop-yarn-common\target\generated-sources\proto
 [INFO] Executed tasks
 [INFO]
 [INFO] --- exec-maven-plugin:1.2:exec (generate-sources) @
 hadoop-yarn-common ---
 [INFO]
 [INFO] --- exec-maven-plugin:1.2:exec (generate-version) @
 hadoop-yarn-common ---
 [INFO]
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Apache Hadoop Project POM . SUCCESS [3.265s]
 [INFO] Apache Hadoop Annotations . SUCCESS [0.766s]
 [INFO] Apache Hadoop Project Dist POM  SUCCESS [0.000s]
 [INFO] Apache Hadoop Assemblies .. SUCCESS [0.156s]
 [INFO] Apache Hadoop Alfredo . SUCCESS [0.578s]
 [INFO] Apache Hadoop Common .. SUCCESS [52.877s]
 [INFO] Apache Hadoop Common Project .. SUCCESS [0.015s]
 [INFO] Apache Hadoop HDFS  SUCCESS [9.376s]
 [INFO] Apache Hadoop HDFS Project  SUCCESS [0.000s]
 [INFO] hadoop-yarn-api ... SUCCESS [20.985s]
 [INFO] hadoop-yarn-common  FAILURE [0.500s]
 [INFO] hadoop-yarn-server-common . SKIPPED
 [INFO] hadoop-yarn-server-nodemanager  SKIPPED
 [INFO] hadoop-yarn-server-resourcemanager  SKIPPED
 [INFO] hadoop-yarn-server-tests .. SKIPPED
 [INFO] hadoop-yarn-server  SKIPPED
 [INFO] hadoop-yarn ... SKIPPED
 [INFO] hadoop-mapreduce-client-core .. SKIPPED
 [INFO] hadoop-mapreduce-client-common  SKIPPED
 [INFO] hadoop-mapreduce-client-shuffle ... SKIPPED
 [INFO] hadoop-mapreduce-client-app ... SKIPPED
 [INFO] hadoop-mapreduce-client-hs  SKIPPED
 [INFO] hadoop-mapreduce-client-jobclient . SKIPPED
 [INFO] hadoop-mapreduce-client ... SKIPPED
 [INFO] hadoop-mapreduce .. SKIPPED
 [INFO] Apache Hadoop Main  SKIPPED
 [INFO]
 
 [INFO] BUILD FAILURE
 [INFO]
 
 [INFO] Total time: 1:29.940s
 [INFO] Finished at: Mon Aug 29 23:41:34 PDT 2011
 [INFO] Final Memory: 11M/40M
 [INFO]
 
 [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2:exec
 (generate-version) on project hadoop-yarn-common: Command execution failed.
 Cannot run program scripts\saveVersion.sh (in directory
 C:\temp\hadoop_trunk1\hadoop-mapreduce-project\hadoop-yarn\hadoop-yarn-common):
 CreateProcess error=2, The system cannot find the file specified - [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions, please
 read the following articles:
 [ERROR] [Help 1]
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR]
 [ERROR] After correcting the problems, you can resume the build with the
 command
 [ERROR]   mvn <goals> -rf :hadoop-yarn-common




RE: [VOTE] Should we release 0.20.204.0-rc3?

2011-08-31 Thread Eric Payne
+1 (non-binding)
 
I downloaded the patch and installed it on a 10-node cluster.
 
I successfully ran randomwriter twice and the following 2 SLive tests:
 
hadoop --config $HADOOP_CONF_DIR org.apache.hadoop.fs.slive.SliveTest \
    -appendSize 1,67108864 -append 0,uniform -baseDir /user/$USER/S-Live \
    -blockSize 67108864,67108864 -create 0,uniform -delete 20,uniform -dirSize 16 \
    -duration 300 -files 1024 -ls 20,uniform -maps 20 -mkdir 20,uniform -ops 1 \
    -packetSize 65536 -readSize 1,4294967295 -read 20,uniform -reduces 5 \
    -rename 20,uniform -replication 1,3 -resFile $RESFILE \
    -seed 12345678 -sleep 100,1000 -writeSize 1,67108864

hadoop --config $HADOOP_CONF_DIR org.apache.hadoop.fs.slive.SliveTest \
    -appendSize 1,67108864 -append 0,uniform -baseDir /user/$USER/S-Live \
    -blockSize 67108864,67108864 -create 100,uniform -delete 0,uniform -dirSize 16 \
    -duration 300 -files 1024 -ls 0,uniform -maps 20 -mkdir 0,uniform -ops 1 \
    -packetSize 65536 -readSize 1,4294967295 -read 0,uniform -reduces 5 \
    -rename 0,uniform -replication 1,3 -resFile $RESFILE -seed 12345678 \
    -sleep 100,1000 -writeSize 1,67108864

Thanks,
-Eric Payne
 
--
From: Owen O'Malley [o...@hortonworks.com]
Sent: Thu 8/25/2011 7:12 PM
To: common-dev@hadoop.apache.org
Subject: [VOTE] Should we release 0.20.204.0-rc3?
 
 
All,
   I've fixed the issues that Allen observed in the previous rc for 0.20.204
and rolled the new bundle up at
http://people.apache.org/~omalley/hadoop-0.20.204.0-rc3. Please download the
tarball, compile it, and try it out. All of the tests pass, and I've run
several 1TB sorts with 15,000 maps and 110 reduces, with only 1 task failure
across 3 runs.
 
Thanks,
   Owen

[jira] [Created] (HADOOP-7598) smart-apply-patch.sh does not work on some older versions of BASH

2011-08-31 Thread Robert Joseph Evans (JIRA)
smart-apply-patch.sh does not work on some older versions of BASH
-

 Key: HADOOP-7598
 URL: https://issues.apache.org/jira/browse/HADOOP-7598
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0, 0.24.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 0.23.0, 0.24.0


I don't really know why, but on some versions of bash (including the one I use,
which is also on the build servers):
bash --version
GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.

The line
{code}
elif [[ $PREFIX_DIRS =~ ^(hadoop-common-project|hadoop-hdfs-project|hadoop-mapreduce-project)$ ]]; then
{code}
evaluates to false, but if the test is moved out of the elif statement then it
works correctly.
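
For illustration, a sketch of the kind of restructuring described above; the surrounding variable and branch names are hypothetical, not the actual smart-apply-patch.sh code:
{code}
# Evaluate the regex match in its own statement first ...
PREFIX_OK=false
if [[ $PREFIX_DIRS =~ ^(hadoop-common-project|hadoop-hdfs-project|hadoop-mapreduce-project)$ ]]; then
  PREFIX_OK=true
fi

# ... then branch on the plain flag, which behaves consistently on
# older bash releases such as 3.2.25.
if [[ -n $SOME_EARLIER_TEST ]]; then   # hypothetical earlier branch
  :
elif [[ $PREFIX_OK == true ]]; then
  :  # branch that previously failed to fire when the =~ test sat in the elif
fi
{code}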

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-08-31 Thread Eric Yang (JIRA)
Improve hadoop setup conf script to setup secure Hadoop cluster
---

 Key: HADOOP-7599
 URL: https://issues.apache.org/jira/browse/HADOOP-7599
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.20.203.0
 Environment: Java 6, RHEL 5.6
Reporter: Eric Yang
Assignee: Eric Yang
 Fix For: 0.20.205.0


Setting up a secure Hadoop cluster requires a lot of manual setup.  The
motivation of this jira is to provide setup scripts that automate setting up a
secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7601) Move common fs implementations to a hadoop-fs module

2011-08-31 Thread Luke Lu (JIRA)
Move common fs implementations to a hadoop-fs module


 Key: HADOOP-7601
 URL: https://issues.apache.org/jira/browse/HADOOP-7601
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Luke Lu
 Fix For: 0.23.0


Much of the hadoop-common dependency footprint comes from the fs
implementations, and we have more fs implementations on the way (ceph, lafs,
etc.). I propose that we move all the fs implementations to a hadoop-fs module
under hadoop-common-project.
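
For illustration, one possible layout under hadoop-common-project, assuming the module name proposed above (the paths are hypothetical, not an agreed structure):
{code}
hadoop-common-project/
  hadoop-common/                            # core classes and interfaces stay here
  hadoop-fs/                                # proposed module for the fs implementations
    src/main/java/org/apache/hadoop/fs/...  # existing and future implementations (ceph, lafs, ...)
{code}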

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira