Re: Debugging 1.0.0 with jdb

2012-02-01 Thread Harsh J
I've not used jdb since I almost always use Eclipse for this, but to
have Ant run javac with debug info on, you'll need to find the
relevant <javac> elements and add the attribute debug="on" to them,
so that javac produces class files with debug info. You can then pick up
the generated test jar and run it via jdb.
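
For example, a javac element with debug info switched on might look like the
following (the property names here are generic placeholders, not necessarily
the ones used in Hadoop's build.xml):

    <javac srcdir="${src.dir}" destdir="${build.classes}"
           debug="on" debuglevel="lines,vars,source"
           includeantruntime="false"/>

With the rebuilt classes (or the test jar) on the classpath, jdb can then drive
the test runner directly, along the lines of
'jdb -classpath build/classes:build/test/classes:junit.jar org.junit.runner.JUnitCore com.example.MyTest'
(the class and jar names are placeholders).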

On Mon, Jan 30, 2012 at 7:17 AM, Tim Broberg tim.brob...@exar.com wrote:
 I'd like to be able to step through unit tests with jdb to debug my classes.

 Is there a quick-and-easy way to rebuild with ant such that debug information 
 is included?

 Thanks,
    - Tim.




-- 
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about


[jira] [Resolved] (HADOOP-8011) How to use distcp command between 2 clusters of different versions

2012-02-01 Thread Brian Bloniarz (Resolved) (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brian Bloniarz resolved HADOOP-8011.


Resolution: Not A Problem

 How to use distcp command between 2 clusters of different versions
 -------------------------------------------------------------------

 Key: HADOOP-8011
 URL: https://issues.apache.org/jira/browse/HADOOP-8011
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: cldoltd

 I have two clusters, 1.0 and 0.2.
 How do I use distcp to copy between the 2 clusters?
 This is the error:
 Copy failed: java.io.IOException: Call to cluster1 failed on local exception: java.io.EOFException
     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
     at org.apache.hadoop.ipc.Client.call(Client.java:1071)
     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
     at $Proxy1.getProtocolVersion(Unknown Source)
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
     at org.apache.hadoop.tools.DistCp.checkSrcPath(DistCp.java:635)
     at org.apache.hadoop.tools.DistCp.copy(DistCp.java:656)
     at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
     at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
 Caused by: java.io.EOFException
     at java.io.DataInputStream.readInt(DataInputStream.java:375)
     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:800)
     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:745)
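
 For reference, the usual way to copy between clusters whose RPC versions do not
 match (this is the documented DistCp practice, not something stated in this
 resolution) is to run distcp on the newer, destination cluster and read the
 source over the version-independent hftp protocol. Hostnames and ports below
 are placeholders:

     hadoop distcp hftp://old-cluster-nn:50070/src/path hdfs://new-cluster-nn:8020/dst/path

 hftp goes through the source NameNode's HTTP port (50070 by default), so the
 RPC version mismatch that produces the EOFException above is avoided.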





Re: MPI: Java/JNI help

2012-02-01 Thread Ralph Castain
Thanks Kihwal! This was an excellent suggestion and worked great!

Should have the Java bindings out shortly...

On Jan 31, 2012, at 4:35 PM, Kihwal Lee wrote:

 There might be other tricks you can play with CL, but here is my idea: you 
 could have the initial JNI native lib become a sort of wrapper that dlopen()s 
 the real thing (the one the plug-ins depend on) with RTLD_GLOBAL, so that it 
 no longer matters that the JNI library itself is loaded into a private namespace.
 
 Kihwal
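
 A minimal sketch of that wrapper idea (the library name, JNI version, and
 error handling below are illustrative assumptions, not code from this thread):

     /* Thin JNI stub loaded via System.loadLibrary(); its only job is to
        pull the real libmpi into the global namespace before anything else
        references it, so that the plug-ins dlopen()ed later can resolve
        libmpi's symbols. */
     #include <dlfcn.h>
     #include <jni.h>

     JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved) {
         /* RTLD_GLOBAL makes libmpi's symbols visible to shared objects
            loaded afterwards (the OMPI components). */
         if (dlopen("libmpi.so", RTLD_NOW | RTLD_GLOBAL) == NULL) {
             return JNI_ERR;   /* in practice, report dlerror() back to Java */
         }
         return JNI_VERSION_1_6;
     }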
 
 On 1/31/12 4:34 PM, Ralph Castain r...@open-mpi.org wrote:
 
 I was able to dig further into this, and we believe we have finally tracked it 
 down to the root cause. It appears that Java loads things into a private, as 
 opposed to global, namespace. Thus, the Java MPI bindings load the initial 
 libmpi just fine.
 
 However, when libmpi then attempts to load the individual plug-ins beneath 
 it, the load fails due to unresolved symbols. Our plug-ins are implemented as 
 individual DLLs and reference symbols from within the larger libmpi above 
 them. In order to find those symbols, the libraries must be in the global 
 namespace.
 
 We have a workaround - namely, to disable dlopen so all the plug-ins get 
 pulled up into libmpi. However, this eliminates the ability for a vendor to 
 distribute a binary, proprietary plug-in that we absorb during dlopen. For 
 the moment, this isn't a big deal, but it could be an issue down the line. We 
 have some ideas on how to resolve it internally, but it would take a fair 
 amount of work, and have some side-effects.
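
 (For reference, that workaround corresponds to configuring Open MPI with DSO
 support turned off, roughly:

     ./configure --disable-dlopen

 which builds the components into libmpi itself instead of opening them at run
 time; check configure --help on your version for the exact flag.)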
 
 Does anyone know if it is possible to convince Java to use the global 
 namespace? Or can you point me to someone/someplace where I should explore 
 the question?
 
 Thanks
 Ralph
 
 On Jan 30, 2012, at 5:13 PM, Kihwal Lee wrote:
 
 It doesn't have to be static.
 Do the architectures match between the node manager JVM and the library?
 If one is 32-bit and the other is 64-bit, it won't work.
 
 Kihwal
 
 On 1/30/12 5:58 PM, Ralph Castain r...@open-mpi.org wrote:
 
 Hi folks
 
 As per earlier emails, I'm just about ready to release the Java MPI 
 bindings. I have one remaining issue and would appreciate some help.
 
 We typically build OpenMPI dynamically. For the Java bindings, this means 
 that the JNI code underlying the Java binding must dynamically load OMPI 
 plug-ins. Everything works fine on Mac. However, on Linux, I am getting 
 dynamic library load errors.
 
 I have tried setting -Djava.library.path and LD_LIBRARY_PATH to the correct 
 locations. In both cases, I get errors from the JNI code indicating that it 
 was unable to open the specified dynamic library.
 
 I have heard from one person that JNI may need to be built statically, and I 
 suppose it is possible that Apple's customized Java implementation 
 specifically resolved that problem. However, all the online documentation I 
 can find indicates that Java on Linux should also be able to load dynamic 
 libraries - but JNI is not specifically addressed.
 
 Can any of you Java experts provide advice on this behavior? I'd like to get 
 these bindings released!
 
 Thanks
 Ralph
 
 
 
 



[jira] [Created] (HADOOP-8012) hadoop-daemon.sh and yarn-daemon.sh are trying to mkdir and chown log/pid dirs which can fail

2012-02-01 Thread Roman Shaposhnik (Created) (JIRA)
hadoop-daemon.sh and yarn-daemon.sh are trying to mkdir and chown log/pid dirs 
which can fail


 Key: HADOOP-8012
 URL: https://issues.apache.org/jira/browse/HADOOP-8012
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.0
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Priority: Minor
 Fix For: 0.23.1


Here's what I see when using Hadoop in Bigtop:

{noformat}
$ sudo /sbin/service hadoop-hdfs-namenode start
Starting Hadoop namenode daemon (hadoop-namenode): chown: changing ownership of 
`/var/log/hadoop': Operation not permitted
starting namenode, logging to /var/log/hadoop/hadoop-hdfs-namenode-centos5.out
{noformat}

This is a cosmetic issue, but it would be nice to fix it.
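
One possible shape for a fix, sketched here with hypothetical logic (this is
not necessarily the patch that went in; the use of HADOOP_LOG_DIR and
HADOOP_IDENT_STRING below is only illustrative):

{noformat}
# Only create and chown the log dir if it does not already exist, so that
# packaged installs (e.g. Bigtop) that pre-create /var/log/hadoop with the
# right owner no longer trigger the "Operation not permitted" warning.
if [ ! -d "$HADOOP_LOG_DIR" ]; then
  mkdir -p "$HADOOP_LOG_DIR"
  chown "$HADOOP_IDENT_STRING" "$HADOOP_LOG_DIR"
fi
{noformat}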





Fwd: question about branches

2012-02-01 Thread Hwa Zynn
Hi, all,

I have a newcomer question. I would like to browse the HDFS source code for the
1.0 release, but the Git repository (git://git.apache.org/hadoop-hdfs.git)
for the Hadoop-HDFS project doesn't have this branch, not even 0.20. I can see
many more branches in the Hadoop-Common repository
(git://git.apache.org/hadoop-common.git), including branch-1.0.

Why is that? Does Hadoop-Common's branch-1.0 include all the source code for
the HDFS and MapReduce projects?

Thanks,

Yong