Unsubscribe

2024-06-06 Thread Ram Kumar



Error when appending a file on HDFS

2016-06-17 Thread ram kumar
Hi,

I used hdfs.ext.avro.AvroWriter to write an Avro file on HDFS, as described in
http://hdfscli.readthedocs.io/en/latest/api.html#hdfs.ext.avro.AvroWriter


with AvroWriter(client, hdfs_file, append=True, codec="snappy") as writer:
    writer.write(data)

When I call the above in a loop, I get:

java.lang.Exception: Shell Process Exception: Python HdfsError raised
> Traceback (most recent call last):
>   File "Hdfsfile.py", line 49, in process
> writer.write(data)
>   File "/home/ram/lib/python2.7/site-packages/hdfs/ext/avro/__init__.py",
> line 277, in __exit__
> self._fo.__exit__(*exc_info)
>   File "/home/ram/lib/python2.7/site-packages/hdfs/util.py", line 99, in
> __exit__
> raise self._err # pylint: disable=raising-bad-type
> HdfsError: Failed to APPEND_FILE /user/ram/level for
> DFSClient_NONMAPREDUCE_-1757292245_79 on 172.26.83.17 because this file
> lease is currently owned by DFSClient_NONMAPREDUCE_-668446345_78 on
> 172.26.83.17
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2979)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2726)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3033)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3002)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:739)
> at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:429)
> at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
>


It seems the file lease is still held by the previous append.
Is there a way to check whether the file exists each time the append is
called?
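
For reference, here is a minimal sketch of that check (assuming the hdfscli
Client.status() call, which returns None when called with strict=False on a
missing path; the NameNode URL below is a placeholder):

from hdfs import InsecureClient
from hdfs.ext.avro import AvroWriter

client = InsecureClient('http://namenode:50070')  # placeholder WebHDFS URL

def append_records(hdfs_file, records):
    # status() with strict=False returns None when the file does not exist yet,
    # so the first call creates the file and later calls append to it.
    exists = client.status(hdfs_file, strict=False) is not None
    with AvroWriter(client, hdfs_file, append=exists, codec='snappy') as writer:
        for record in records:
            writer.write(record)

Note that the lease error above usually means the previous append has not
been fully closed or its lease has not yet been released, so an existence
check alone may not make it go away.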

Thanks,
Ram


unsubscribe

2016-02-26 Thread Ram






hadoop mapreduce job rest api

2015-12-23 Thread ram kumar
Hi,

I want to submit a MapReduce job using a REST API and then get the status of
the job every n seconds. Is there a way to do this?
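
One option (assuming Hadoop 2.x) is the YARN ResourceManager REST API: POST to
/ws/v1/cluster/apps/new-application to get an application id, POST the
submission context to /ws/v1/cluster/apps, and then poll the application
resource for its state. A rough sketch of the polling half (the
ResourceManager host/port and the interval are placeholders):

import time
import requests

RM = 'http://resourcemanager:8088'  # placeholder ResourceManager address

def poll_application(app_id, interval=60):
    # GET /ws/v1/cluster/apps/{app_id} returns the state and progress of the app.
    url = '{0}/ws/v1/cluster/apps/{1}'.format(RM, app_id)
    while True:
        app = requests.get(url).json()['app']
        print(app['state'], app.get('progress'))
        if app['state'] in ('FINISHED', 'FAILED', 'KILLED'):
            return app
        time.sleep(interval)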

Thanks


Decommission datanode

2015-11-02 Thread ram kumar
Hi,

I don't have much data, but it took around 40 minutes to decommission.

How long should it take to decommission a datanode?
Is there any way to speed up the process?

Thanks.


check decommission status

2015-10-28 Thread ram kumar
Hi,

Is there a Java API to get the decommission status of a particular datanode?
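
For what it's worth, the NameNode's /jmx servlet exposes a NameNodeInfo bean
whose DecomNodes and LiveNodes attributes can be polled (on the Java side, the
datanode report from DistributedFileSystem is said to expose each node's admin
state as well). A minimal sketch of the JMX route (the NameNode URL is a
placeholder, and the bean/attribute names are assumptions to verify):

import json
import requests

JMX = 'http://namenode:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'

def decommission_status(datanode):
    # DecomNodes and LiveNodes are JSON-encoded strings keyed by datanode name.
    info = requests.get(JMX).json()['beans'][0]
    decom = json.loads(info['DecomNodes'])
    if datanode in decom:
        return 'DECOMMISSIONING', decom[datanode]
    live = json.loads(info['LiveNodes'])
    if datanode in live:
        return live[datanode].get('adminState', 'In Service'), live[datanode]
    return 'NOT FOUND', None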

Thanks.





unsubscribe

2015-04-28 Thread Ram
unsubscribe


Unsubscribe

2015-04-09 Thread Ram



Re: Hadoop 2.6 issue

2015-04-01 Thread Ram Kumar
Anand,

Try the Oracle JDK instead of OpenJDK.

Regards,
Ramkumar Bashyam

On Wed, Apr 1, 2015 at 1:25 PM, Anand Murali anand_vi...@yahoo.com wrote:

 Tried export in hadoop-env.sh. Does not work either

 Anand Murali
 11/7, 'Anand Vihar', Kandasamy St, Mylapore
 Chennai - 600 004, India
 Ph: (044)- 28474593/ 43526162 (voicemail)



   On Wednesday, April 1, 2015 1:03 PM, Jianfeng (Jeff) Zhang 
 jzh...@hortonworks.com wrote:



  Try to export JAVA_HOME in hadoop-env.sh


  Best Regard,
 Jeff Zhang


   From: Anand Murali anand_vi...@yahoo.com
 Reply-To: user@hadoop.apache.org user@hadoop.apache.org, Anand Murali
 anand_vi...@yahoo.com
 Date: Wednesday, April 1, 2015 at 2:28 PM
 To: user@hadoop.apache.org user@hadoop.apache.org
 Subject: Hadoop 2.6 issue

Dear All:

  I am unable to start Hadoop even after setting HADOOP_INSTALL,JAVA_HOME
 and JAVA_PATH. Please find below error message

  anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ start-dfs.sh --config
 /home/anand_vihar/hadoop-2.6.0/conf
 Starting namenodes on [localhost]
 localhost: Error: JAVA_HOME is not set and could not be found.
 cat: /home/anand_vihar/hadoop-2.6.0/conf/slaves: No such file or directory
 Starting secondary namenodes [0.0.0.0]
 0.0.0.0: Error: JAVA_HOME is not set and could not be found.



 anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ echo $JAVA_HOME
 /usr/lib/jvm/java-1.7.0-openjdk-amd64
 anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ echo $HADOOP_INSTALL
 /home/anand_vihar/hadoop-2.6.0
 anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ echo $PATH

 :/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/anand_vihar/hadoop-2.6.0/bin:/home/anand_vihar/hadoop-2.6.0/sbin:/usr/lib/jvm/java-1.7.0-openjdk-amd64:/usr/lib/jvm/java-1.7.0-openjdk-amd64
 anand_vihar@Latitude-E5540:~/hadoop-2.6.0$

  I have made no changes in hadoop-env.sh and have run it successfully.


  Core-site.xml
  <?xml version="1.0"?>
 <!--core-site.xml-->
 <configuration>
 <property>
 <name>fs.default.name</name>
 <value>hdfs://localhost/</value>
 </property>
 </configuration>

  HDFS-site.xml
  <?xml version="1.0"?>
 <!-- hdfs-site.xml -->
 <configuration>
 <property>
 <name>dfs.replication</name>
 <value>1</value>
 </property>
 </configuration>

  Mapred-site.xml
 <?xml version="1.0"?>
 <!--mapred-site.xml-->
 <configuration>
 <property>
 <name>mapred.job.tracker</name>
 <value>localhost:8021</value>
 </property>
 </configuration>

  Shall be thankful, if somebody can advise.

  Regards,


  Anand Murali
 11/7, 'Anand Vihar', Kandasamy St, Mylapore
 Chennai - 600 004, India
 Ph: (044)- 28474593/ 43526162 (voicemail)





Re: changing log verbosity

2015-02-24 Thread Ram Kumar
Hi Jonathan,

For audit logs you can look at the log4j.properties file. By default, the
log4j.properties file has the audit log threshold set to WARN. By setting this
level to INFO, audit logging can be turned on. The following snippet shows
the log4j.properties configuration when the HDFS and MapReduce audit logs
are turned on.

#
# hdfs audit logging
#
hdfs.audit.logger=INFO,NullAppender
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p%c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}

#
# mapred audit logging
#
mapred.audit.logger=INFO,NullAppender
mapred.audit.log.maxfilesize=256MB
mapred.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p%c{2}: %m%n
log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}

Regards,
Ramkumar Bashyam


On Tue, Feb 24, 2015 at 2:36 PM, Jonathan Aquilina jaquil...@eagleeyet.net
wrote:

  How does one go about changing the log verbosity in hadoop? What
 configuration file should I be looking at?



 --
 Regards,
 Jonathan Aquilina
 Founder Eagle Eye T




Re: unsubscribe

2015-02-22 Thread Ram Kumar
Check http://hadoop.apache.org/mailing_lists.html#User

Regards,
Ramkumar Bashyam

On Sun, Feb 22, 2015 at 1:48 PM, Mainak Bandyopadhyay 
mainak.bandyopadh...@gmail.com wrote:

 unsubscribe.





Re: unscubscribe

2015-02-22 Thread Ram Kumar
Check http://hadoop.apache.org/mailing_lists.html#User

Regards,
Ramkumar Bashyam

On Mon, Feb 23, 2015 at 12:29 AM, Umesh Reddy ur2...@yahoo.com wrote:

 unsubscribe



Re: unsubscribe

2015-02-03 Thread Ram Kumar
Check http://hadoop.apache.org/mailing_lists.html#User

Regards,
Ramkumar Bashyam

On Wed, Jan 7, 2015 at 7:01 PM, Kiran Prasad Gorigay 
kiranprasa...@imimobile.com wrote:

unsubscribe






Re: unsubscribe me

2014-12-03 Thread Ram Kumar
Email to user-unsubscr...@hadoop.apache.org to unsubscribe.

Regards,
Ramkumar Bashyam

On Wed, Dec 3, 2014 at 4:43 PM, chandu banavaram chandu.banava...@gmail.com
 wrote:

 please unsubscribe me



Re: sqoop oracle connection error

2013-09-04 Thread Ram
Hi Ravi,
   Thanks for the post. The problem was the listener and privileges; also, the
database name, username, and table name have to be in CAPITAL letters. Here is
the query:

sqoop import --connect
jdbc:oracle:thin:@//ramesh.ops.cloudwick.com/CLOUD --username RAMESH
--password password --table TEST -m 1


Here is the output.

[root@ramesh bin]# sqoop import --connect jdbc:oracle:thin:@//
ramesh.ops.cloudwick.com/CLOUD --username RAMESH --password password
--table TEST -m 1
13/09/05 10:34:20 WARN tool.BaseSqoopTool: Setting your password on the
command-line is insecure. Consider using -P instead.

13/09/05 10:34:21 INFO manager.SqlManager: Using default fetchSize of 1000
13/09/05 10:34:21 INFO tool.CodeGenTool: Beginning code generation
13/09/05 10:34:22 INFO manager.OracleManager: Time zone has been set to GMT
13/09/05 10:34:22 INFO manager.SqlManager: Executing SQL statement: SELECT
t.* FROM TEST t WHERE 1=0
13/09/05 10:34:22 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is
/usr/lib/hadoop
13/09/05 10:34:22 INFO orm.CompilationManager: Found hadoop core jar at:
/usr/lib/hadoop/hadoop-core.jar
Note: /tmp/sqoop-root/compile/2633ca54b23921416d40e2bdd5141abb/TEST.java
uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/09/05 10:34:24 INFO orm.CompilationManager: Writing jar file:
/tmp/sqoop-root/compile/2633ca54b23921416d40e2bdd5141abb/TEST.jar
13/09/05 10:34:24 INFO manager.OracleManager: Time zone has been set to GMT
13/09/05 10:34:24 INFO manager.OracleManager: Time zone has been set to GMT
13/09/05 10:34:24 INFO mapreduce.ImportJobBase: Beginning import of TEST
13/09/05 10:34:25 INFO manager.OracleManager: Time zone has been set to GMT
13/09/05 10:34:29 INFO mapred.JobClient: Running job: job_201309051031_0001
13/09/05 10:34:30 INFO mapred.JobClient:  map 0% reduce 0%
13/09/05 10:34:44 INFO mapred.JobClient:  map 100% reduce 0%
13/09/05 10:34:46 INFO mapred.JobClient: Job complete: job_201309051031_0001
13/09/05 10:34:46 INFO mapred.JobClient: Counters: 18
13/09/05 10:34:46 INFO mapred.JobClient:   Job Counters
13/09/05 10:34:46 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12771
13/09/05 10:34:46 INFO mapred.JobClient: Total time spent by all
reduces waiting after reserving slots (ms)=0
13/09/05 10:34:46 INFO mapred.JobClient: Total time spent by all maps
waiting after reserving slots (ms)=0
13/09/05 10:34:46 INFO mapred.JobClient: Launched map tasks=1
13/09/05 10:34:46 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/09/05 10:34:46 INFO mapred.JobClient:   File Output Format Counters
13/09/05 10:34:46 INFO mapred.JobClient: Bytes Written=24
13/09/05 10:34:46 INFO mapred.JobClient:   FileSystemCounters
13/09/05 10:34:46 INFO mapred.JobClient: HDFS_BYTES_READ=87
13/09/05 10:34:46 INFO mapred.JobClient: FILE_BYTES_WRITTEN=58070
13/09/05 10:34:46 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=24
13/09/05 10:34:46 INFO mapred.JobClient:   File Input Format Counters
13/09/05 10:34:46 INFO mapred.JobClient: Bytes Read=0
13/09/05 10:34:46 INFO mapred.JobClient:   Map-Reduce Framework
13/09/05 10:34:46 INFO mapred.JobClient: Map input records=6
13/09/05 10:34:46 INFO mapred.JobClient: Physical memory (bytes)
snapshot=117080064
13/09/05 10:34:46 INFO mapred.JobClient: Spilled Records=0
13/09/05 10:34:46 INFO mapred.JobClient: CPU time spent (ms)=2320
13/09/05 10:34:46 INFO mapred.JobClient: Total committed heap usage
(bytes)=105775104
13/09/05 10:34:46 INFO mapred.JobClient: Virtual memory (bytes)
snapshot=861552640
13/09/05 10:34:46 INFO mapred.JobClient: Map output records=6
13/09/05 10:34:46 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
13/09/05 10:34:46 INFO mapreduce.ImportJobBase: Transferred 24 bytes in
21.2419 seconds (1.1298 bytes/sec)
13/09/05 10:34:46 INFO mapreduce.ImportJobBase: Retrieved 6 records.
[root@ramesh bin]#


Hi,



From,
Ramesh.




On Sat, Aug 31, 2013 at 2:45 PM, Ravi Kiran ravikiranmag...@gmail.comwrote:

 Hi ,
Can you check if you are able to ping or telnet to the ip address and
 port of Oracle database from your machine.  I have a hunch that Oracle
 Listener is stopped . If so , start it.
 The commands to check the status and start if the listener isn't running.

 $ lsnrctl status
 $ lsnrctl start

 Regards

 Ravi Magham


 On Sat, Aug 31, 2013 at 2:05 PM, Krishnan Narayanan 
 krishnan.sm...@gmail.com wrote:

 Hi Ram,

 I get the same error.If you find an answer pls dp fwd it to me. I will do
 the same.

 Thx
 Krish


 On Sat, Aug 31, 2013 at 12:00 AM, Ram pramesh...@gmail.com wrote:


 Hi,
I am trying to import table from oracle hdfs. i am getting the
 following error

 ERROR manager.SqlManager: Error executing statement:
 java.sql.SQLRecoverableException: IO Error: The Network Adapter could not
 establish the connection
 java.sql.SQLRecoverableException: IO Error: The Network Adapter could
 not establish the connection

 any work around this.

 the query is:

 sqoop import --connect

sqoop oracle connection error

2013-08-31 Thread Ram
Hi,
   I am trying to import a table from Oracle into HDFS, and I am getting the
following error:

ERROR manager.SqlManager: Error executing statement:
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not
establish the connection
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not
establish the connection

Is there any workaround for this?

The query is:

sqoop import --connect
jdbc:oracle:thin:@//ramesh.ops.cloudwick.com/cloud --username ramesh
--password password --table cloud.test -m 1

The output is as follows:

[root@ramesh ram]# sqoop import --connect jdbc:oracle:thin:@//
ramesh.ops.cloudwick.com/cloud --username ramesh --password password
--table cloud.test -m 1
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
13/08/31 12:27:27 WARN tool.BaseSqoopTool: Setting your password on the
command-line is insecure. Consider using -P instead.
13/08/31 12:27:27 INFO manager.SqlManager: Using default fetchSize of 1000
13/08/31 12:27:27 INFO tool.CodeGenTool: Beginning code generation
13/08/31 12:27:27 ERROR manager.SqlManager: Error executing statement:
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not
establish the connection
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not
establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:458)
at oracle.jdbc.driver.PhysicalConnection.init(PhysicalConnection.java:546)
at oracle.jdbc.driver.T4CConnection.init(T4CConnection.java:236)
at
oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at
org.apache.sqoop.manager.OracleManager.makeConnection(OracleManager.java:313)
at
org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:605)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:628)
at
org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:235)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:219)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:347)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1255)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1072)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:82)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:390)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
Caused by: oracle.net.ns.NetException: The Network Adapter could not
establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:392)
at
oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:434)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:687)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:247)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1102)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:320)
... 24 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:150)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:370)
... 29 more
13/08/31 12:27:27 ERROR manager.OracleManager: Failed to rollback
transaction
java.lang.NullPointerException
at
org.apache.sqoop.manager.OracleManager.getColumnNames(OracleManager.java:744)
at org.apache.sqoop.orm.ClassWriter.getColumnNames(ClassWriter.java:1222)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1074)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:82)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:390)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229

impala

2013-08-26 Thread Ram
Hi,
Can anyone help with the following?

How exactly does Impala work? What happens when you submit a query? How is
the data transferred to the different nodes?


From,
Ramesh.


hadoop-nagios integration

2013-07-24 Thread Ram
Hi,
I have installed Nagios and Hadoop 2.0.0. I want to integrate the Hadoop
services, hosts, and parameters such as total HDFS storage, how much HDFS
storage is available, and which datanodes are up and running, so that I can
get alerts.

   Has anyone worked through this?
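
For example, here is a minimal sketch of a Nagios check (assuming the NameNode
web UI exposes the usual /jmx endpoint and a
Hadoop:service=NameNode,name=FSNamesystemState bean; the host, port, and
thresholds below are placeholders):

import sys
import requests

NAMENODE_JMX = 'http://namenode:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'

def check_hdfs(warn_pct=80.0, crit_pct=90.0):
    # Pull capacity and live-datanode counts from the NameNode JMX servlet.
    bean = requests.get(NAMENODE_JMX).json()['beans'][0]
    total = bean['CapacityTotal']
    remaining = bean['CapacityRemaining']
    live = bean['NumLiveDataNodes']
    used_pct = 100.0 * (total - remaining) / total
    msg = 'HDFS used {0:.1f}%, {1} live datanodes'.format(used_pct, live)
    if used_pct >= crit_pct or live == 0:
        print('CRITICAL - ' + msg)
        return 2
    if used_pct >= warn_pct:
        print('WARNING - ' + msg)
        return 1
    print('OK - ' + msg)
    return 0

if __name__ == '__main__':
    sys.exit(check_hdfs())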


Thanks,
Ramesh.


Re: copy files from ftp to hdfs in parallel, distcp failed

2013-07-16 Thread Ram
Hi,
    Please replace 0.0.0.0 with your FTP host's IP address and try it.

Hi,



From,
Ramesh.




On Mon, Jul 15, 2013 at 3:22 PM, Hao Ren h@claravista.fr wrote:

  Thank you, Ram

 I have configured core-site.xml as following:

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

  <!-- Put site-specific property overrides in this file. -->

  <configuration>

  <property>
  <name>hadoop.tmp.dir</name>
  <value>/vol/persistent-hdfs</value>
  </property>

  <property>
  <name>fs.default.name</name>
  <value>hdfs://ec2-23-23-33-234.compute-1.amazonaws.com:9010
  </value>
  </property>

  <property>
  <name>io.file.buffer.size</name>
  <value>65536</value>
  </property>

  <property>
  <name>fs.ftp.host</name>
  <value>0.0.0.0</value>
  </property>

  <property>
  <name>fs.ftp.host.port</name>
  <value>21</value>
  </property>

  </configuration>

 Then I tried  hadoop fs -ls file:/// , it works.
 But hadoop fs -ls ftp://login:password@ftp server ip/directory/
 doesn't work as usual:
 ls: Cannot access ftp://user:password@ftp server
 ip/directory/: No such file or directory.

 When ignoring directroy as :

 hadoop fs -ls ftp://login:password@ftp server ip/

 There are no error msgs, but it lists nothing.


 I have also check the rights for my /home/user directroy:

 drwxr-xr-x 11 user user  4096 jui 11 16:30 user

 and all the files under /home/user have rights 755.

 I can easily copy the link ftp://user:password@ftp server
 ip/directory/ to firefox, it lists all the files as expected.

 Any workaround here ?

 Thank you.

  On 12/07/2013 14:01, Ram wrote:

  Please configure the following in core-site.xml and try.
Use hadoop fs -ls file:///  -- to display local file system files
Use hadoop fs -ls ftp://your ftp location   -- to display ftp files
 if it is listing files go for distcp.

  reference from
 http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml


fs.ftp.host 0.0.0.0 FTP filesystem connects to this server
 fs.ftp.host.port 21 FTP filesystem connects to fs.ftp.host on this port



 --
 Hao Ren
 ClaraVistawww.claravista.fr




Re: Staging directory ENOTDIR error.

2013-07-12 Thread Ram
Hi jay,
what hadoop command you are given.

Hi,



From,
Ramesh.




On Fri, Jul 12, 2013 at 7:54 AM, Devaraj k devara...@huawei.com wrote:

   Hi Jay,

 Here the client is trying to create a staging directory in the local file
  system, which it actually should create in HDFS.

  Could you check whether you have configured “fs.defaultFS” in the client
  to point at HDFS.

  Thanks

  Devaraj k

  *From:* Jay Vyas [mailto:jayunit...@gmail.com]
  *Sent:* 12 July 2013 04:12
  *To:* common-u...@hadoop.apache.org
  *Subject:* Staging directory ENOTDIR error.

  Hi, I'm getting an ungoogleable exception, never seen this before.

 This is on a hadoop 1.1. cluster... It appears that its permissions
 related... 

 Any thoughts as to how this could crop up?

 I assume its a bug in my filesystem, but not sure.


 13/07/11 18:39:43 ERROR security.UserGroupInformation:
 PriviledgedActionException as:root cause:ENOTDIR: Not a directory
 ENOTDIR: Not a directory
 at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
 at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:699)
 at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:654)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
 at
 org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
 at
 org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)

 


 --
 Jay Vyas
 http://jayunit100.blogspot.com 



Re: copy files from ftp to hdfs in parallel, distcp failed

2013-07-12 Thread Ram
Hi,
   Please configure the following in core-site.xml and try.
   Use hadoop fs -ls file:///  -- to display local file system files
   Use hadoop fs -ls ftp://your ftp location  -- to display ftp files; if
it lists the files, go for distcp.

reference from
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml


fs.ftp.host        0.0.0.0   FTP filesystem connects to this server
fs.ftp.host.port   21        FTP filesystem connects to fs.ftp.host on this port
and try to set the property also

reference from hadoop definitive guide hadoop file system.

Filesystem: FTP
URI scheme: ftp
Java implementation (under org.apache.hadoop): fs.ftp.FTPFileSystem
Description: A filesystem backed by an FTP server.
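
If distcp still refuses to run, a rough fallback sketch (purely hypothetical,
using Python's ftplib for the FTP side and the hdfscli WebHDFS client for
HDFS; host names, credentials, and paths below are placeholders) would be to
stream the files one by one:

from ftplib import FTP
from hdfs import InsecureClient

ftp = FTP('ftp.example.com')                       # placeholder FTP host
ftp.login('login', 'password')                     # placeholder credentials
client = InsecureClient('http://namenode:50070')   # placeholder WebHDFS URL

ftp.cwd('/directory')
for name in ftp.nlst():
    # Stream each FTP file straight into an HDFS file of the same name.
    with client.write('/user/ram/' + name) as writer:
        ftp.retrbinary('RETR ' + name, writer.write)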


Hi,



From,
Ramesh.




On Fri, Jul 12, 2013 at 1:04 PM, Hao Ren h@claravista.fr wrote:

 On 11/07/2013 20:47, Balaji Narayanan (பாலாஜி நாராயணன்) wrote:

 multiple copy jobs to hdfs


 Thank you for your reply and the link.

 I read the link before, but I didn't find any examples about copying file
 from ftp to hdfs.

 There are about 20-40 file in my directory. I just want to move or copy
 that directory to hdfs on Amazon EC2.

 Actually, I am new to hadoop. I would like to know how to do multiple copy
 jobs to hdfs without distcp.

 Thank you again.


 --
 Hao Ren
 ClaraVista
 www.claravista.fr



Re: Taktracker in namenode failure

2013-07-12 Thread Ram
Hi,
The problem is probably with the jar file only. To check, run any other MR job
or the sample wordcount job on the namenode's tasktracker. If it runs, there is
no problem with the namenode tasktracker; if it does not run, there may be a
problem with the tasktracker configuration, so compare it with the tasktracker
configuration on the other nodes (i.e. the mapred configuration).

Hi,



From,
Ramesh.




On Fri, Jul 12, 2013 at 3:37 PM, Devaraj k devara...@huawei.com wrote:

   I think there is a mismatch of jars coming into the classpath for the map
  tasks when they run on different machines. You can find this out by giving
  some unique name to your Mapper class and job submit class and then
  submitting the Job.

  Thanks

  Devaraj k

 *From:* Ramya S [mailto:ram...@suntecgroup.com]
 *Sent:* 12 July 2013 15:27
 *To:* user@hadoop.apache.org
 *Subject:* RE: Taktracker in namenode failure


  Both the map output value class configured and the output value written
  from the mapper are the Text class, so there is no mismatch in the value class.

  But when the same MR program is run with 2 tasktrackers (without a
  tasktracker on the namenode), the exception does not occur.

  The problem is only with the tasktracker running on the namenode.

  

  

  

  Thanks & Regards

  Ramya.S

   --

 *From:* Devaraj k [mailto:devara...@huawei.com devara...@huawei.com]
 *Sent:* Fri 7/12/2013 3:04 PM
 *To:* user@hadoop.apache.org
 *Subject:* RE: Taktracker in namenode failure

  Could you tell us what Map Output Value class you are configuring while
  submitting the Job, and what type of value is being written from the
  Mapper? If these two mismatch then it will throw the below error.

  

 Thanks

 Devaraj k

  

 *From:* Ramya S [mailto:ram...@suntecgroup.com ram...@suntecgroup.com]
 *Sent:* 12 July 2013 14:46
 *To:* user@hadoop.apache.org
 *Subject:* Taktracker in namenode failure

  

 Hi,

  

  Why does only the tasktracker on the namenode fail during job execution with
  this error?

 I have attached the snapshot of error screen with this mail

 java.io.IOException: Type mismatch in value from map: expected 
 org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.IntWritable

 at 
 org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1019)

 at 
 org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)

 at 
 org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)

 at WordCount$TokenizerMapper.map(WordCount.java:30)

 at WordCount$TokenizerMapper.map(WordCount.java:19)

 at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)

 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)

 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)

 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)

 at java.security.AccessController.doPrivileged(Native Method)

 at javax.security.auth.Subject.doAs(Subject.java:416)

 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)

 at org.apache.hadoop.mapred.Child.main(Child.java:249)

  

  But this same task is reassigned to another tasktracker and gets executed.
  Why?

   

  

  Best Regards,

  Ramya



Re: Cloudera links and Document

2013-07-11 Thread Ram
Hi,
Go through the links.

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Managing-Clusters/cmmc_CM_architecture.html


http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_installing_configuring_dbs.html


Hi,



From,
Ramesh.




On Thu, Jul 11, 2013 at 6:58 PM, Sathish Kumar sa848...@gmail.com wrote:

 Hi All,

 Can anyone help me the link or document that explain the below.

 How Cloudera Manager works and handle the clusters (Agent and Master
 Server)?
 How the Cloudera Manager Process Flow works?
 Where can I locate Cloudera configuration files and explanation in brief?


 Regards
 Sathish




Re: Issues Running Hadoop 1.1.2 on multi-node cluster

2013-07-10 Thread Ram
Hi,
   Please check that all the directories/files configured in mapred-site.xml
exist on the local system, and that the files/directories have mapred as the
user and hadoop as the group.
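
As a quick way to verify this, here is a small sketch (the config path below
is a placeholder) that reads mapred.local.dir from mapred-site.xml and prints
whether each directory exists and which owner/group it has:

import os
import pwd
import grp
import xml.etree.ElementTree as ET

def check_local_dirs(conf='/etc/hadoop/conf/mapred-site.xml'):  # placeholder path
    # Read mapred.local.dir and report whether each directory exists and
    # whether it is owned by user 'mapred' and group 'hadoop'.
    dirs = []
    for prop in ET.parse(conf).getroot().findall('property'):
        if prop.findtext('name') == 'mapred.local.dir':
            dirs = prop.findtext('value').split(',')
    for d in (p.strip() for p in dirs):
        if not os.path.isdir(d):
            print('MISSING: ' + d)
            continue
        st = os.stat(d)
        print('{0}: owner={1} group={2}'.format(
            d, pwd.getpwuid(st.st_uid).pw_name, grp.getgrgid(st.st_gid).gr_name))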

Hi,



From,
P.Ramesh Babu,
+91-7893442722.



On Wed, Jul 10, 2013 at 9:36 PM, Leonid Fedotov lfedo...@hortonworks.comwrote:

 Make sure your mapred.local.dir (check it in mapred-site.xml) actually
 exists and is writable by your mapreduce user.

 *Thank you!*
 *
 *
 *Sincerely,*
 *Leonid Fedotov*


 On Jul 9, 2013, at 6:09 PM, Kiran Dangeti wrote:

 Hi Siddharth,

 While running the multi-node we need to take care of the local host of the
 slave machine from the error messages the task tracker root directory not
 able to get to the masters. Please check and rerun it.

 Thanks,
 Kiran


 On Tue, Jul 9, 2013 at 10:26 PM, siddharth mathur sidh1...@gmail.comwrote:

 Hi,

 I have installed Hadoop 1.1.2 on a 5 nodes cluster. I installed it
 watching this tutorial *
 http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
 *

 When I startup the hadoop, I get the folloing error in *all* the
 tasktrackers.

 
 2013-07-09 12:15:22,301 INFO org.apache.hadoop.mapred.UserLogCleaner:
 Adding job_201307051203_0001 for user-log deletion with
 retainTimeStamp:1373472921775
 2013-07-09 12:15:22,301 INFO org.apache.hadoop.mapred.UserLogCleaner:
 Adding job_201307051611_0001 for user-log deletion with
 retainTimeStamp:1373472921775
 2013-07-09 12:15:22,601 INFO org.apache.hadoop.mapred.TaskTracker:*Failed to 
 get system directory
 *...
 2013-07-09 12:15:25,164 INFO org.apache.hadoop.mapred.TaskTracker: Failed
 to get system directory...
 2013-07-09 12:15:27,901 INFO org.apache.hadoop.mapred.TaskTracker: Failed
 to get system directory...
 2013-07-09 12:15:30,144 INFO org.apache.hadoop.mapred.TaskTracker: Failed
 to get system directory...
 

 *But everything looks fine in the webUI. *

 When I run a job, I get the following error but the job completes
 anyways. I have* attached the* *screenshots* of the maptask failed error
 log in the UI.

 **
 13/07/09 12:29:37 INFO input.FileInputFormat: Total input paths to
 process : 2
 13/07/09 12:29:37 INFO util.NativeCodeLoader: Loaded the native-hadoop
 library
 13/07/09 12:29:37 WARN snappy.LoadSnappy: Snappy native library not loaded
 13/07/09 12:29:37 INFO mapred.JobClient: Running job:
 job_201307091215_0001
 13/07/09 12:29:38 INFO mapred.JobClient:  map 0% reduce 0%
 13/07/09 12:29:41 INFO mapred.JobClient: Task Id :
 attempt_201307091215_0001_m_01_0, Status : FAILED
 Error initializing attempt_201307091215_0001_m_01_0:
 ENOENT: No such file or directory
 at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
 at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:699)
 at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:654)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
 at
 org.apache.hadoop.mapred.JobLocalizer.initializeJobLogDir(JobLocalizer.java:240)
 at
 org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:205)
 at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1331)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at
 org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1306)
 at
 org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1221)
 at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2581)
 at java.lang.Thread.run(Thread.java:724)

 13/07/09 12:29:41 WARN mapred.JobClient: Error reading task
 outputhttp://dmkd-1:50060/tasklog?plaintext=trueattemptid=attempt_201307091215_0001_m_01_0filter=stdout
 13/07/09 12:29:41 WARN mapred.JobClient: Error reading task
 outputhttp://dmkd-1:50060/tasklog?plaintext=trueattemptid=attempt_201307091215_0001_m_01_0filter=stderr
 13/07/09 12:29:45 INFO mapred.JobClient:  map 50% reduce 0%
 13/07/09 12:29:53 INFO mapred.JobClient:  map 50% reduce 16%
 13/07/09 12:30:38 INFO mapred.JobClient: Task Id :
 attempt_201307091215_0001_m_00_1, Status : FAILED
 Error initializing attempt_201307091215_0001_m_00_1:
 ENOENT: No such file or directory
 at org.apache.hadoop.io.nativeio.NativeIO.chmod(Native Method)
 at org.apache.hadoop.fs.FileUtil.execSetPermission(FileUtil.java:699)
 at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:654)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
 at
 org.apache.hadoop.mapred.JobLocalizer.initializeJobLogDir(JobLocalizer.java:240)
 at
 

How to configure Hive metastore (Mysql) for beeswax(Hive UI) in Clouera Manager

2013-07-09 Thread Ram
Hi,
I am using Cloudera Manager 4.1.2, which does not have Hive as a service, so I
installed Hive myself and configured MySQL as the metastore. Using Cloudera
Manager I installed Hue. In Hue, Beeswax (the Hive UI) uses a Derby database by
default; I want to configure its metastore to be the same one Hive is using
(MySQL), so that both Hive and Beeswax refer to the same database and
metastore.

I changed the hive-site.xml file
in /var/run/cloudera-scm-agent/process/662-hue-HUE_SERVER/hive-conf
and /var/run/cloudera-scm-agent/process/663-hue-BEESWAX_SERVER/hive-conf,
but Beeswax is not pointing to the MySQL metastore, and every restart of the
Hue service makes Cloudera Manager generate a new configuration file.

Any suggestions on where to make the configuration changes? Thanks in advance.

From,
Ramesh Babu,


Running Distributed shell in hadoop0.23

2011-12-14 Thread sri ram
Hi,
 Can anyone give the procedure for running the distributed shell example
on Hadoop YARN, so that I can try to understand how the application master
really works?




Error while starting datanode in hadoop 0.23 in secure mode

2011-12-13 Thread sri ram
Hi,
 I receive the following error while starting datanode in secure
mode of hadoop 0.23

2011-12-14 14:35:48,468 INFO  http.HttpServer
(HttpServer.java:addGlobalFilter(476)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer$
2011-12-14 14:35:48,471 WARN  lib.StaticUserWebFilter
(StaticUserWebFilter.java:getUsernameFromConf(141)) - dfs.web.ugi should
not be used. Instead, use had$
2011-12-14 14:35:48,472 INFO  http.HttpServer
(HttpServer.java:addFilter(454)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUse$
2011-12-14 14:35:48,472 INFO  http.HttpServer
(HttpServer.java:addFilter(461)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUse$
2011-12-14 14:35:48,473 INFO  http.HttpServer
(HttpServer.java:addFilter(461)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUse$
jetty.ssl.password : jetty.ssl.keypassword : 2011-12-14 14:35:53,553 INFO
mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2011-12-14 14:35:54,044 INFO  mortbay.log (Slf4jLog.java:info(67)) -
Started SelectChannelConnector@master:1006
2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
java.lang.NullPointerException
2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) - failed
Krb5AndCertsSslSocketConnector@0.0.0.0:1005: java.io.IOException:
!JsseListener:$
2011-12-14 14:35:54,048 WARN  mortbay.log (Slf4jLog.java:warn(76)) - failed
Server@1867df9: java.io.IOException: !JsseListener:
java.lang.NullPointerExcepti$
2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
Stopped Krb5AndCertsSslSocketConnector@0.0.0.0:1005
2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
Stopped SelectChannelConnector@master:1006
2011-12-14 14:35:54,189 INFO  datanode.DataNode
(DataNode.java:shutdown(1741)) - Waiting for threadgroup to exit, active
threads is 0
2011-12-14 14:35:54,190 ERROR datanode.DataNode
(DataNode.java:secureMain(2371)) - Exception in secureMain
java.io.IOException: !JsseListener: java.lang.NullPointerException
at
org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:516)
at
org.apache.hadoop.security.Krb5AndCertsSslSocketConnector.newServerSocket(Krb5AndCertsSslSocketConnector.java:123)
at
org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
at
org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:283)
at
org.mortbay.jetty.bio.SocketConnector.doStart(SocketConnector.java:147)

Is there any way to resolve this???




Re: Error while starting datanode in hadoop 0.23 in secure mode

2011-12-13 Thread sri ram
Thanks for the reply.
   I have also tried with the IPs of the individual systems, but the
same error recurs.

On Tue, Dec 13, 2011 at 3:40 PM, alo alt wget.n...@googlemail.com wrote:

 Hi,

 2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
 failed Krb5AndCertsSslSocketConnector@0.0.0.0:1005:
 java.io.IOException: !JsseListener:$
 2011-12-14 14:35:54,048 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
 failed Server@1867df9: java.io.IOException: !JsseListener:
 java.lang.NullPointerExcepti$
 2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
 Stopped Krb5AndCertsSslSocketConnector@0.0.0.0:1005

 0.0.0.0 as Kerberos - IP can't work.

 - Alex

 On Tue, Dec 13, 2011 at 10:27 AM, sri ram rsriram...@gmail.com wrote:
  Hi,
   I receive the following error while starting datanode in secure
  mode of hadoop 0.23
 
  2011-12-14 14:35:48,468 INFO  http.HttpServer
  (HttpServer.java:addGlobalFilter(476)) - Added global filter 'safety'
  (class=org.apache.hadoop.http.HttpServer$
  2011-12-14 14:35:48,471 WARN  lib.StaticUserWebFilter
  (StaticUserWebFilter.java:getUsernameFromConf(141)) - dfs.web.ugi should
 not
  be used. Instead, use had$
  2011-12-14 14:35:48,472 INFO  http.HttpServer
  (HttpServer.java:addFilter(454)) - Added filter static_user_filter
  (class=org.apache.hadoop.http.lib.StaticUse$
  2011-12-14 14:35:48,472 INFO  http.HttpServer
  (HttpServer.java:addFilter(461)) - Added filter static_user_filter
  (class=org.apache.hadoop.http.lib.StaticUse$
  2011-12-14 14:35:48,473 INFO  http.HttpServer
  (HttpServer.java:addFilter(461)) - Added filter static_user_filter
  (class=org.apache.hadoop.http.lib.StaticUse$
  jetty.ssl.password : jetty.ssl.keypassword : 2011-12-14 14:35:53,553 INFO
  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
  2011-12-14 14:35:54,044 INFO  mortbay.log (Slf4jLog.java:info(67)) -
 Started
  SelectChannelConnector@master:1006
  2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
  java.lang.NullPointerException
  2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
 failed
  Krb5AndCertsSslSocketConnector@0.0.0.0:1005: java.io.IOException:
  !JsseListener:$
  2011-12-14 14:35:54,048 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
 failed
  Server@1867df9: java.io.IOException: !JsseListener:
  java.lang.NullPointerExcepti$
  2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
 Stopped
  Krb5AndCertsSslSocketConnector@0.0.0.0:1005
  2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
 Stopped
  SelectChannelConnector@master:1006
  2011-12-14 14:35:54,189 INFO  datanode.DataNode
  (DataNode.java:shutdown(1741)) - Waiting for threadgroup to exit, active
  threads is 0
  2011-12-14 14:35:54,190 ERROR datanode.DataNode
  (DataNode.java:secureMain(2371)) - Exception in secureMain
  java.io.IOException: !JsseListener: java.lang.NullPointerException
  at
 
 org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:516)
  at
 
 org.apache.hadoop.security.Krb5AndCertsSslSocketConnector.newServerSocket(Krb5AndCertsSslSocketConnector.java:123)
  at
  org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
  at
  org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:283)
  at
  org.mortbay.jetty.bio.SocketConnector.doStart(SocketConnector.java:147)
 
  Is there any way to resolve this???
 



 --
 Alexander Lorenz
 http://mapredit.blogspot.com

 P Think of the environment: please don't print this email unless you
 really need to.



Re: Error while starting datanode in hadoop 0.23 in secure mode

2011-12-13 Thread sri ram
telnet to the master IP says it is unable to connect to the remote host.
This is my property for the datanode in secure mode:
<property>
<name>dfs.datanode.https.address</name>
<value>master:1005</value>
</property>


On Tue, Dec 13, 2011 at 4:18 PM, alo alt wget.n...@googlemail.com wrote:

 Check if kerberos respond:

 telnet KERBEROS_IP 1005

 As I know use kerberos per default ports 88, AFS token 746 (), kx509
 9878

 - Alex


 On Tue, Dec 13, 2011 at 11:39 AM, sri ram rsriram...@gmail.com wrote:
  Thanks for the reply,
 I have tried with the ip of the individual systems
 also.But
  the same eroor reoccurs
 
 
  On Tue, Dec 13, 2011 at 3:40 PM, alo alt wget.n...@googlemail.com
 wrote:
 
  Hi,
 
  2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
  failed Krb5AndCertsSslSocketConnector@0.0.0.0:1005:
  java.io.IOException: !JsseListener:$
  2011-12-14 14:35:54,048 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
  failed Server@1867df9: java.io.IOException: !JsseListener:
  java.lang.NullPointerExcepti$
  2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
  Stopped Krb5AndCertsSslSocketConnector@0.0.0.0:1005
 
  0.0.0.0 as Kerberos - IP can't work.
 
  - Alex
 
  On Tue, Dec 13, 2011 at 10:27 AM, sri ram rsriram...@gmail.com wrote:
   Hi,
I receive the following error while starting datanode in
 secure
   mode of hadoop 0.23
  
   2011-12-14 14:35:48,468 INFO  http.HttpServer
   (HttpServer.java:addGlobalFilter(476)) - Added global filter 'safety'
   (class=org.apache.hadoop.http.HttpServer$
   2011-12-14 14:35:48,471 WARN  lib.StaticUserWebFilter
   (StaticUserWebFilter.java:getUsernameFromConf(141)) - dfs.web.ugi
 should
   not
   be used. Instead, use had$
   2011-12-14 14:35:48,472 INFO  http.HttpServer
   (HttpServer.java:addFilter(454)) - Added filter static_user_filter
   (class=org.apache.hadoop.http.lib.StaticUse$
   2011-12-14 14:35:48,472 INFO  http.HttpServer
   (HttpServer.java:addFilter(461)) - Added filter static_user_filter
   (class=org.apache.hadoop.http.lib.StaticUse$
   2011-12-14 14:35:48,473 INFO  http.HttpServer
   (HttpServer.java:addFilter(461)) - Added filter static_user_filter
   (class=org.apache.hadoop.http.lib.StaticUse$
   jetty.ssl.password : jetty.ssl.keypassword : 2011-12-14 14:35:53,553
   INFO
   mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
   2011-12-14 14:35:54,044 INFO  mortbay.log (Slf4jLog.java:info(67)) -
   Started
   SelectChannelConnector@master:1006
   2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
   java.lang.NullPointerException
   2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
   failed
   Krb5AndCertsSslSocketConnector@0.0.0.0:1005: java.io.IOException:
   !JsseListener:$
   2011-12-14 14:35:54,048 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
   failed
   Server@1867df9: java.io.IOException: !JsseListener:
   java.lang.NullPointerExcepti$
   2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
   Stopped
   Krb5AndCertsSslSocketConnector@0.0.0.0:1005
   2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
   Stopped
   SelectChannelConnector@master:1006
   2011-12-14 14:35:54,189 INFO  datanode.DataNode
   (DataNode.java:shutdown(1741)) - Waiting for threadgroup to exit,
 active
   threads is 0
   2011-12-14 14:35:54,190 ERROR datanode.DataNode
   (DataNode.java:secureMain(2371)) - Exception in secureMain
   java.io.IOException: !JsseListener: java.lang.NullPointerException
   at
  
  
 org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:516)
   at
  
  
 org.apache.hadoop.security.Krb5AndCertsSslSocketConnector.newServerSocket(Krb5AndCertsSslSocketConnector.java:123)
   at
   org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
   at
  
 org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:283)
   at
  
 org.mortbay.jetty.bio.SocketConnector.doStart(SocketConnector.java:147)
  
   Is there any way to resolve this???
  
 
 
 
  --
  Alexander Lorenz
  http://mapredit.blogspot.com
 
  P Think of the environment: please don't print this email unless you
  really need to.
 
 



 --
 Alexander Lorenz
 http://mapredit.blogspot.com

 P Think of the environment: please don't print this email unless you
 really need to.



Re: Error while starting datanode in hadoop 0.23 in secure mode

2011-12-13 Thread sri ram
 16:19:28,101 WARN  mortbay.log (Slf4jLog.java:warn(76)) - failed
Server@179d854: java.io.IOException: !JsseListener:
java.lang.NullPointerExcepti$
2011-12-13 16:19:28,104 INFO  mortbay.log (Slf4jLog.java:info(67)) -
Stopped Krb5AndCertsSslSocketConnector@master:1005
2011-12-13 16:19:28,104 INFO  mortbay.log (Slf4jLog.java:info(67)) -
Stopped SelectChannelConnector@master:1006
2011-12-13 16:19:28,107 INFO  datanode.DataNode
(DataNode.java:shutdown(1741)) - Waiting for threadgroup to exit, active
threads is 0
2011-12-13 16:19:28,108 ERROR datanode.DataNode
(DataNode.java:secureMain(2371)) - Exception in secureMain
java.io.IOException: !JsseListener: java.lang.NullPointerException
at
org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:516)
at
org.apache.hadoop.security.Krb5AndCertsSslSocketConnector.newServerSocket(Krb5AndCertsSslSocketConnector.java:123)
at
org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
at
org.mortbay.jetty.AbstractConnector.doStart(AbstractConnector.java:283)
at
org.mortbay.jetty.bio.SocketConnector.doStart(SocketConnector.java:147)

at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.Server.doStart(Server.java:235)
at
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.hadoop.http.HttpServer.start(HttpServer.java:639)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:575)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1501)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.init(DataNode.java:457)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2263)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2196)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2219)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2367)
at
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at
org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:188)
2011-12-13 16:19:28,111 INFO  datanode.DataNode (StringUtils.java:run(605))
- SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down DataNode at master.example.com/147.128.152.179


On Tue, Dec 13, 2011 at 4:23 PM, sri ram rsriram...@gmail.com wrote:

 telnet master ip says unable to connect to remote host.
 This is my following property for datanode in secure mode
  <property>
  <name>dfs.datanode.https.address</name>
  <value>master:1005</value>
  </property>



 On Tue, Dec 13, 2011 at 4:18 PM, alo alt wget.n...@googlemail.com wrote:

 Check if kerberos respond:

 telnet KERBEROS_IP 1005

 As I know use kerberos per default ports 88, AFS token 746 (), kx509
 9878

 - Alex


 On Tue, Dec 13, 2011 at 11:39 AM, sri ram rsriram...@gmail.com wrote:
  Thanks for the reply,
 I have tried with the ip of the individual systems
 also.But
  the same eroor reoccurs
 
 
  On Tue, Dec 13, 2011 at 3:40 PM, alo alt wget.n...@googlemail.com
 wrote:
 
  Hi,
 
  2011-12-14 14:35:54,047 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
  failed Krb5AndCertsSslSocketConnector@0.0.0.0:1005:
  java.io.IOException: !JsseListener:$
  2011-12-14 14:35:54,048 WARN  mortbay.log (Slf4jLog.java:warn(76)) -
  failed Server@1867df9: java.io.IOException: !JsseListener:
  java.lang.NullPointerExcepti$
  2011-12-14 14:35:54,085 INFO  mortbay.log (Slf4jLog.java:info(67)) -
  Stopped Krb5AndCertsSslSocketConnector@0.0.0.0:1005
 
  0.0.0.0 as Kerberos - IP can't work.
 
  - Alex
 
  On Tue, Dec 13, 2011 at 10:27 AM, sri ram rsriram...@gmail.com
 wrote:
   Hi,
I receive the following error while starting datanode in
 secure
   mode of hadoop 0.23
  
   2011-12-14 14:35:48,468 INFO  http.HttpServer
   (HttpServer.java:addGlobalFilter(476)) - Added global filter 'safety'
   (class=org.apache.hadoop.http.HttpServer$
   2011-12-14 14:35:48,471 WARN  lib.StaticUserWebFilter
   (StaticUserWebFilter.java:getUsernameFromConf(141)) - dfs.web.ugi
 should
   not
   be used. Instead, use had$
   2011-12-14 14:35:48,472 INFO  http.HttpServer
   (HttpServer.java:addFilter(454)) - Added filter static_user_filter
   (class=org.apache.hadoop.http.lib.StaticUse$
   2011-12-14 14:35:48,472 INFO  http.HttpServer
   (HttpServer.java:addFilter(461)) - Added filter static_user_filter
   (class=org.apache.hadoop.http.lib.StaticUse$
   2011-12-14

Re: Error while starting datanode in hadoop 0.23 in secure mode

2011-12-13 Thread sri ram
Both master and master.example.com point to the current local address.
The namenode is running well.

On Tue, Dec 13, 2011 at 4:57 PM, alo alt wget.n...@googlemail.com wrote:

 Hi,

 master.example.com? I don't think that a NN is running there ;)
 And master are available in DNS?

 The config looks misconfigured, you have to setup a working environment.

 - Alex


 On Tue, Dec 13, 2011 at 12:02 PM, sri ram rsriram...@gmail.com wrote:
  The following is the content of mapred-site.xml
  <?xml version="1.0"?>
  <?xml-stylesheet href="configuration.xsl"?>
  <configuration>
  <property>
  <name>dfs.replication</name>
  <value>1</value>
  </property>
  <property>
  <name>dfs.permissions</name>
  <value>false</value>
  </property>
  <property>
  <name>dfs.namenode.name.dir</name>
  <value>/app/tmp/name</value>
  </property>
  <property>
  <name>dfs.datanode.data.dir</name>
  <value>/app/tmp/data</value>
  </property>
  <!--kerberos-->
  <!--NAMENODE CONF-->
  <property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
  </property>
  <property>
  <name>dfs.https.enable</name>
  <value>true</value>
  </property>
  <property>
  <name>dfs.namenode.https-address</name>
  <value>master.example.com:50470</value>
  </property>
  <property>
  <name>dfs.https.port</name>
  <value>50470</value>
  </property>
  <property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/security/keytab/nn.service.keytab</value>
  </property>
  <property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nn/master.example@example.com</value>
  </property>
  <property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>host/master.example@example.com</value>
  </property>
  <!--DATANODE CONF-->
  <property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
  </property>
  <property>
  <name>dfs.datanode.address</name>
  <value>master:1003</value>
  </property>

  <property>
  <name>dfs.datanode.https.address</name>
  <value>master:1005</value>
  </property>
  <property>
  <name>dfs.datanode.http.address</name>
  <value>master:1006</value>
  </property>

  <property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytab/dn.service.keytab</value>
  </property>
  <property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>dn/master.example@example.com</value>
  </property>
  <property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>host/master.example@example.com</value>
  </property>

  </configuration>
 
  The following is the error log while starting datanode as root
  2011-12-13 16:19:26,837 INFO  datanode.DataNode
  (StringUtils.java:startupShutdownMessage(589)) - STARTUP_MSG:
  /
  STARTUP_MSG: Starting DataNode
  STARTUP_MSG:   host = master.example.com/147.128.152.179
  STARTUP_MSG:   args = []
  STARTUP_MSG:   version = 0.23.0
  STARTUP_MSG:   classpath =
 
 /usr/local/hadoop/conf:/usr/local/hadoop/libexec/../share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/libexec/../share$
  STARTUP_MSG:   build =
  git://
 devadm900.cc1.ygridcore.net/grid/0/dev/acm/hadoop-trunk/hadoop-common-project/hadoop-common
  -r d4fee83ec1462ab9824add6449320617$
  /
  2011-12-13 16:19:26,921 WARN  common.Util (Util.java:stringAsURI(63)) -
 Path
  /app/tmp/data should be specified as a URI in configuration files. Please
  updat$
  2011-12-13 16:19:27,314 INFO  security.UserGroupInformation
  (UserGroupInformation.java:loginUserFromKeytab(633)) - Login successful
 for
  user dn/master.examp$
  2011-12-13 16:19:27,454 WARN  impl.MetricsConfig
  (MetricsConfig.java:loadFirst(125)) - Cannot locate configuration: tried
  hadoop-metrics2-datanode.propertie$
  2011-12-13 16:19:27,548 INFO  impl.MetricsSystemImpl
  (MetricsSystemImpl.java:startTimer(343)) - Scheduled snapshot period at
 10
  second(s).
  2011-12-13 16:19:27,548 INFO  impl.MetricsSystemImpl
  (MetricsSystemImpl.java:start(182)) - DataNode metrics system started
  2011-12-13 16:19:27,549 INFO  impl.MetricsSystemImpl
  (MetricsSystemImpl.java:registerSource(244)) - Registered source
 UgiMetrics
  2011-12-13 16:19:27,572 INFO  datanode.DataNode
  (DataNode.java:initDataXceiver(701)) - Opened info server at 1003
  2011-12-13 16:19:27,585 INFO  datanode.DataNode
  (DataXceiverServer.java:init(77)) - Balancing bandwith is 1048576
 bytes/s
  2011-12-13 16:19:27,617 INFO  mortbay.log (Slf4jLog.java:info(67)) -
 Logging
  to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
  org.mortbay.log.Slf4j$
  2011-12-13 16:19:27,703 INFO  http.HttpServer
  (HttpServer.java:addGlobalFilter(476)) - Added global filter 'safety'
  (class=org.apache.hadoop.http.HttpServer$
  2011-12-13 16:19:27,706 WARN  lib.StaticUserWebFilter
  (StaticUserWebFilter.java:getUsernameFromConf(141)) - dfs.web.ugi should
 not
  be used. Instead, use had$
  2011-12-13 16:19:27,707 INFO  http.HttpServer
  (HttpServer.java:addFilter(454)) - Added filter static_user_filter
  (class=org.apache.hadoop.http.lib.StaticUse$
  2011-12-13 16:19:27,707 INFO  http.HttpServer
  (HttpServer.java:addFilter(461)) - Added filter static_user_filter

Cannot start secure cluster without privileged resources.

2011-12-12 Thread sri ram
Hi,
   I tried installing Hadoop 0.23 in secure mode and I am stuck with
the following error:

java.lang.RuntimeException: Cannot start secure cluster without privileged
resources.
at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1487)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.init(DataNode.java:457)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2263)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2196)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2219)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2367)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2384)
2011-12-12 13:24:04,419 INFO  datanode.DataNode (StringUtils.java:run(605))
- SHUTDOWN_MSG:

I tried the suggestions in this post
http://www.mail-archive.com/common-user@hadoop.apache.org/msg13679.html
but failed to get the configuration working.


hadoop 0.23 secure mode error

2011-12-12 Thread sri ram
Hi,
  I am trying to form a Hadoop 0.23 cluster in secure mode.
   While starting the nodemanager I get the following error:
2011-12-12 15:37:26,874 INFO  ipc.HadoopYarnRPC
(HadoopYarnProtoRPC.java:getProxy(48)) - Creating a HadoopYarnProtoRpc
proxy for protocol interface org.apac$
2011-12-12 15:37:26,953 INFO  nodemanager.NodeStatusUpdaterImpl
(NodeStatusUpdaterImpl.java:registerWithRM(155)) - Connected to
ResourceManager at master:80$
2011-12-12 15:37:38,784 WARN  ipc.Client (Client.java:run(526)) - Couldn't
setup connection for nm/ad...@master.example.com to rm/
ad...@master.example.com
2011-12-12 15:37:38,787 ERROR service.CompositeService
(CompositeService.java:start(72)) - Error starting services
org.apache.hadoop.yarn.server.nodemanager$
org.apache.avro.AvroRuntimeException:
java.lang.reflect.UndeclaredThrowableException
at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:132)
at
org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
at
org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:163)
at
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:231)
Caused by: java.lang.reflect.UndeclaredThrowableException
at
org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:66)
at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:161)
at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:128)
... 3 more
Caused by: com.google.protobuf.ServiceException: java.io.IOException:
Failed on local exception: java.io.IOException: Couldn't setup connection
for nm/admin$
at
org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
at $Proxy14.registerNodeManager(Unknown Source)
at
org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:59)
... 5 more
Caused by: java.io.IOException: Failed on local exception:
java.io.IOException: Couldn't setup connection for nm/
ad...@master.example.com to rm/admin@MASTER$
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
at org.apache.hadoop.ipc.Client.call(Client.java:1089)


Any help is appreciated


Re: hadoop 0.23 secure mode error

2011-12-12 Thread sri ram
Actually I am trying it locally with just 3 systems. I have generated
the keytabs in Kerberos and added these users to the ACL. Is there any other
configuration required?

On Mon, Dec 12, 2011 at 10:51 PM, Robert Evans ev...@yahoo-inc.com wrote:

  It looks like you do not have nm/ad...@master.example.com configured in
 your kerberos setup.  I wonder how much traffic example.com gets on a
 daily basis.

 --Bobby Evans


 On 12/12/11 4:15 AM, sri ram rsriram...@gmail.com wrote:

 Hi,
   I am trying to form a hadoop cluster of 0.23 version in secure
 mode.
While starting nodemanager i get the following error
 2011-12-12 15:37:26,874 INFO  ipc.HadoopYarnRPC
 (HadoopYarnProtoRPC.java:getProxy(48)) - Creating a HadoopYarnProtoRpc
 proxy for protocol interface org.apac$
 2011-12-12 15:37:26,953 INFO  nodemanager.NodeStatusUpdaterImpl
 (NodeStatusUpdaterImpl.java:registerWithRM(155)) - Connected to
 ResourceManager at master:80$
 2011-12-12 15:37:38,784 WARN  ipc.Client (Client.java:run(526)) - Couldn't
 setup connection for nm/ad...@master.example.com to
 rm/ad...@master.example.com
 2011-12-12 15:37:38,787 ERROR service.CompositeService
 (CompositeService.java:start(72)) - Error starting services
 org.apache.hadoop.yarn.server.nodemanager$
 org.apache.avro.AvroRuntimeException:
 java.lang.reflect.UndeclaredThrowableException
 at
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:132)
 at
 org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
 at
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:163)
 at
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:231)
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:66)
 at
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:161)
 at
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:128)
 ... 3 more
 Caused by: com.google.protobuf.ServiceException: java.io.IOException:
 Failed on local exception: java.io.IOException: Couldn't setup connection
 for nm/admin$
 at
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
 at $Proxy14.registerNodeManager(Unknown Source)
 at
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:59)
 ... 5 more
 Caused by: java.io.IOException: Failed on local exception:
 java.io.IOException: Couldn't setup connection for
 nm/ad...@master.example.com to rm/admin@MASTER$
 at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
 at org.apache.hadoop.ipc.Client.call(Client.java:1089)


 Any help is appreciated




Registration of Node Manager Failed

2011-11-25 Thread sri ram
Hi,
   I am trying to install Hadoop 0.23 and form a small cluster with 3
machines.
   Whenever I start the nodemanager and the resource manager, the
nodemanager fails to start with the following error log. This happens on
both the master and the slaves.

2011-11-25 13:40:15,244 INFO  service.AbstractService
(AbstractService.java:start(61)) - Service:Dispatcher is started.
2011-11-25 13:40:15,244 INFO  ipc.YarnRPC (YarnRPC.java:create(47)) -
Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
2011-11-25 13:40:15,246 INFO  ipc.HadoopYarnRPC
(HadoopYarnProtoRPC.java:getProxy(48)) - Creating a HadoopYarnProtoRpc
proxy for protocol interface
org.apache.hadoop.yarn.server.api.ResourceTracker
2011-11-25 13:40:15,289 INFO  nodemanager.NodeStatusUpdaterImpl
(NodeStatusUpdaterImpl.java:registerWithRM(155)) - Connected to
ResourceManager at master:8025
2011-11-25 13:40:15,407 ERROR service.CompositeService
(CompositeService.java:start(72)) - Error starting services
org.apache.hadoop.yarn.server.nodemanager.NodeManager
org.apache.avro.AvroRuntimeException: org.apache.hadoop.yarn.YarnException:
Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager
failed
at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:132)
at
org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
at
org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:163)
at
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:231)
Caused by: org.apache.hadoop.yarn.YarnException: Recieved SHUTDOWN signal
from Resourcemanager ,Registration of NodeManager failed
at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:165)
at
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:128)
... 3 more
2011-11-25 13:40:15,408 INFO  event.AsyncDispatcher
(AsyncDispatcher.java:run(71)) - AsyncDispatcher thread interrupted
java.lang.InterruptedException
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
at
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:69)
at java.lang.Thread.run(Thread.java:636)
2011-11-25 13:40:15,410 INFO  service.AbstractService
(AbstractService.java:stop(75)) - Service:Dispatcher is stopped.
2011-11-25 13:40:15,470 INFO  mortbay.log (Slf4jLog.java:info(67)) -
Stopped SelectChannelConnector@0.0.0.0:
2011-11-25 13:40:15,588 INFO  service.AbstractService
(AbstractService.java:stop(75)) -
Service:org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer is
stopped.
2011-11-25 13:40:15,589 INFO  ipc.Server (Server.java:stop(1709)) -
Stopping server on 59072
2011-11-25 13:40:15,589 INFO  ipc.Server (Server.java:run(1533)) - IPC
Server handler 0 on 59072: exiting
2011-11-25 13:40:15,590 INFO  ipc.Server (Server.java:run(1533)) - IPC
Server handler 1 on 59072: exiting
2011-11-25 13:40:15,590 INFO  ipc.Server (Server.java:run(1533)) - IPC
Server handler 2 on 59072: exiting
2011-11-25 13:40:15,591 INFO  ipc.Server (Server.java:run(1533)) - IPC
Server handler 3 on 59072: exiting
2011-11-25 13:40:15,591 INFO  ipc.Server (Server.java:run(1533)) - IPC
Server handler 4 on 59072: exiting
2011-11-25 13:40:15,593 INFO  ipc.Server (Server.java:run(495)) - Stopping
IPC Server listener on 59072
2011-11-25 13:40:15,594 INFO  service.AbstractService
(AbstractService.java:stop(75)) -
Service:org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler
is stopped.
2011-11-25 13:40:15,600 INFO  event.AsyncDispatcher
(AsyncDispatcher.java:run(71)) - AsyncDispatcher thread interrupted
java.lang.InterruptedException
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386)
at
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:69)
at java.lang.Thread.run(Thread.java:636)
2011-11-25 13:40:15,601 INFO  ipc.Server (Server.java:run(637)) - Stopping
IPC Server Responder
2011-11-25 13:40:15,601 INFO  service.AbstractService
(AbstractService.java:stop(75)) - Service:Dispatcher is stopped.
2011-11-25 13:40:15,602 WARN  monitor.ContainersMonitorImpl
(ContainersMonitorImpl.java:run(464)) -
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
is interrupted. Exiting.
2011-11-25 13:40:15,602 INFO  


Unable to connect to the url

2011-03-20 Thread James Ram
Hi,

I am using a standalone Linux machine. The Namenode and Datanode are running,
but when I try to access the UI in my browser it shows an unable to
connect error. I know this is a basic question, but please help me. I have
given the configuration I am using below.

*Core-site.xml*

<property>
  <name>fs.default.name</name>
  <value>hdfs://160.110.185.93/</value>
</property>

*Mapred-site.xml*

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>160.110.185.93:8021</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/sao_user/hadoop_sws/mapred/local</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx400m</value>
  </property>
</configuration>

*Hdfs-site.xml*

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/sao_user/tmp/hadoop-sao_user/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/sao_user/tmp/hadoop-sao_user/hdfs/namesecd</value>
    <final>true</final>
  </property>
</configuration>


Stopping datanodes dynamically

2011-01-31 Thread Jam Ram

How do I remove multiple datanodes dynamically from the master node without
stopping it?
-- 
View this message in context: 
http://old.nabble.com/Stopping-datanodes-dynamically-tp30804859p30804859.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



Urgent Need: Sr. Developer - Hadoop Hive | Cupertino, CA

2010-10-29 Thread Ram Prakash

Job Title: Sr. Developer - Hadoop Hive
Location: Cupertino, CA

Relevant Experience (Yrs): 10+ Yrs

Technical/Functional Skills: 10+ years of strong technical and
implementation experience in diversified data warehouse technologies such as
1. Teradata
2. Hadoop-Hive
3. GreenPlum
4. MongoDB
5. Oracle Coherence
6. TimesTen
- Good understanding of the pros and cons of data warehousing technologies
- Past experience in evaluating data warehousing technologies
- Handled large volumes of data for processing and reporting
- Good team leader skills

Roles & Responsibilities: Technical expert in EDW

Please send me your resume with contact information.


Thanks,
Ram Prakash
E-Solutionsin, Inc 
ram.prak...@e-solutionsinc.com
www.e-solutionsinc.com
-- 
View this message in context: 
http://old.nabble.com/Urgent-Need%3A-Sr.-Developer---Hadoop-Hive-%7C-Cupertino%2C-CA-tp30088922p30088922.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.



Re: Enabling IHbase

2010-05-16 Thread Ram Kulbak
Hi Renato,

I've published an updated version of IHBASE. It's available to download from
http://github.com/ykulbak/ihbase/downloads.
I've also added a wiki page explaining how to get started at
http://wiki.github.com/ykulbak/ihbase/getting-started
Getting started is very simple:

1. Edit hbase-site.xml and set IdxRegion as the region implementation:
<property>
  <name>hbase.hregion.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.IdxRegion</value>
</property>

2. Edit hbase-env.sh and add ihbase jar and commons-lang version 2.4 jar to
hbase classpath

3. You can use the code example on the wiki page to test your setup
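
For anyone who cannot reach the wiki, a rough sketch of such a test follows.
It is only a guess at the API based on the class names mentioned in this
thread (IdxColumnDescriptor, IdxIndexDescriptor, IdxQualifierType, IdxScan and
the Expression/Comparison helpers); package locations and exact signatures may
differ in the released jar:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.idx.IdxColumnDescriptor;
import org.apache.hadoop.hbase.client.idx.IdxIndexDescriptor;
import org.apache.hadoop.hbase.client.idx.IdxQualifierType;
import org.apache.hadoop.hbase.client.idx.IdxScan;
import org.apache.hadoop.hbase.client.idx.exp.Comparison;
import org.apache.hadoop.hbase.client.idx.exp.Expression;
import org.apache.hadoop.hbase.util.Bytes;

public class IhbaseSmokeTest {
  public static void main(String[] args) throws Exception {
    HBaseConfiguration conf = new HBaseConfiguration();

    // Create a table with one indexed family 'f', indexing qualifier 'q'.
    HTableDescriptor desc = new HTableDescriptor("idx_test");
    IdxColumnDescriptor family = new IdxColumnDescriptor(Bytes.toBytes("f"));
    family.addIndexDescriptor(
        new IdxIndexDescriptor(Bytes.toBytes("q"), IdxQualifierType.BYTE_ARRAY));
    desc.addFamily(family);
    new HBaseAdmin(conf).createTable(desc);

    // Write one row.
    HTable table = new HTable(conf, "idx_test");
    Put put = new Put(Bytes.toBytes("row1"));
    put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("42"));
    table.put(put);

    // Scan it back through the index with an equality expression.
    IdxScan scan = new IdxScan();
    scan.setExpression(Expression.comparison(Bytes.toBytes("f"),
        Bytes.toBytes("q"), Comparison.Operator.EQ, Bytes.toBytes("42")));
    System.out.println(table.getScanner(scan).next());
  }
}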

Many usage examples can be found in the tests or you can ask me, I'll be
glad to help.

Yoram

On Fri, May 14, 2010 at 12:32 AM, Ram Kulbak ram.kul...@gmail.com wrote:

 Hi Renato,

 IHBASE is currently broken. I expect to have it fixed tomorrow or the day
 after.
 When it's fixed, I'll publish a release under
 http://github.com/ykulbak/ihbase and add a wiki page explaining how to get
 started. I'll also send a note to the mailing list.
 Please feel free to contact me regarding issues with IHBASE.

 Yoram



 On Thu, May 13, 2010 at 2:25 AM, Stack st...@duboce.net wrote:

 You saw this package doc over in the ihbase's new home on github?

 http://github.com/ykulbak/ihbase/blob/master/src/main/java/org/apache/hadoop/hbase/client/idx/package.html
  It'll read better if you build the javadoc.  There is also this:
 http://github.com/ykulbak/ihbase/blob/master/README

 St.Ack

 On Wed, May 12, 2010 at 8:27 AM, Renato Marroquín Mogrovejo
 renatoj.marroq...@gmail.com wrote:
  Hi Alex,
 
  Thanks for your help, but I meant something more like a how-to set it up
  thing, or like a tutorial of it (=
  I also read these ones if anyone else is interested.
 
  http://blog.sematext.com/2010/03/31/hbase-digest-march-2010/
  http://search-hadoop.com/m/5MBst1uL87b1
 
  Renato M.
 
 
 
  2010/5/12 alex kamil alex.ka...@gmail.com
 
  regarding usage this may be helpful
  https://issues.apache.org/jira/browse/HBASE-2167
 
 
  On Wed, May 12, 2010 at 10:48 AM, alex kamil alex.ka...@gmail.com
 wrote:
 
  Renato,
 
  just noticed you are looking for *Indexed *Hbase
 
  i found this
 
 http://blog.reactive.org/2010/03/indexed-hbase-it-might-not-be-what-you.html
 
  Alex
 
 
  On Wed, May 12, 2010 at 10:42 AM, alex kamil alex.ka...@gmail.com
 wrote:
 
 
 
 http://www.google.com/search?hl=ensource=hpq=hbase+tutorialaq=faqi=g-p1g-sx3g1g-sx4g-msx1aql=oq=gs_rfai=
 
 
  On Wed, May 12, 2010 at 10:25 AM, Renato Marroquín Mogrovejo 
  renatoj.marroq...@gmail.com wrote:
 
  Hi eveyone,
 
  I just read about IHbase and seems like something I could give it a
 try,
  but
  I haven't been able to find information (besides descriptions and
  advantages) regarding to how to install it or use it.
  Thanks in advance.
 
  Renato M.
 
 
 
 
 
 





Re: Enabling IHbase

2010-05-13 Thread Ram Kulbak
Hi Renato,

IHBASE is currently broken. I expect to have it fixed tomorrow or the day
after.
When it's fixed, I'll publish a release under
http://github.com/ykulbak/ihbase and add a wiki page explaining how to get
started. I'll also send a note to the mailing list.
Please feel free to contact me regarding issues with IHBASE.

Yoram


On Thu, May 13, 2010 at 2:25 AM, Stack st...@duboce.net wrote:

 You saw this package doc over in the ihbase's new home on github?

 http://github.com/ykulbak/ihbase/blob/master/src/main/java/org/apache/hadoop/hbase/client/idx/package.html
  It'll read better if you build the javadoc.  There is also this:
 http://github.com/ykulbak/ihbase/blob/master/README

 St.Ack

 On Wed, May 12, 2010 at 8:27 AM, Renato Marroquín Mogrovejo
 renatoj.marroq...@gmail.com wrote:
  Hi Alex,
 
  Thanks for your help, but I meant something more like a how-to set it up
  thing, or like a tutorial of it (=
  I also read these ones if anyone else is interested.
 
  http://blog.sematext.com/2010/03/31/hbase-digest-march-2010/
  http://search-hadoop.com/m/5MBst1uL87b1
 
  Renato M.
 
 
 
  2010/5/12 alex kamil alex.ka...@gmail.com
 
  regarding usage this may be helpful
  https://issues.apache.org/jira/browse/HBASE-2167
 
 
  On Wed, May 12, 2010 at 10:48 AM, alex kamil alex.ka...@gmail.com
 wrote:
 
  Renato,
 
  just noticed you are looking for *Indexed *Hbase
 
  i found this
 
 http://blog.reactive.org/2010/03/indexed-hbase-it-might-not-be-what-you.html
 
  Alex
 
 
  On Wed, May 12, 2010 at 10:42 AM, alex kamil alex.ka...@gmail.com
 wrote:
 
 
 
 http://www.google.com/search?hl=ensource=hpq=hbase+tutorialaq=faqi=g-p1g-sx3g1g-sx4g-msx1aql=oq=gs_rfai=
 
 
  On Wed, May 12, 2010 at 10:25 AM, Renato Marroquín Mogrovejo 
  renatoj.marroq...@gmail.com wrote:
 
  Hi eveyone,
 
  I just read about IHbase and seems like something I could give it a
 try,
  but
  I haven't been able to find information (besides descriptions and
  advantages) regarding to how to install it or use it.
  Thanks in advance.
 
  Renato M.
 
 
 
 
 
 



Re: Regarding IntSet implementation

2010-05-12 Thread Ram Kulbak
Hi Lekhnath,

The IntSets are package protected so that their callers will always use the
IntSet interface, thus preventing manipulation of the IntSet after it was
built and hiding implementation details. It seems to me that having an index
which can spill to disk may be a handy feature, perhaps you can create a
patch with your suggested changes/additions?
The latest version of IHBASE can be obtained from
http://github.com/ykulbak/ihbase

Cheers,
Yoram


On Mon, May 10, 2010 at 9:17 PM, Lekhnath lbhu...@veriskhealth.com wrote:

 Hi folks,
 I have to use numerous search criteria, each with lots of distinct
 values, so secondary indexing like IHBase will require lots of memory.
 I think I need a custom index implementation in which I persist some of the
 IHBase-like structures. For that I need to reuse IHBase's IntSet
 implementations. They are package protected, so I could not extend the
 implementation and am forced to rewrite the code.
 Is there any good reason why the implementations are package protected?

 Thanks,
 Lekhnath






Re: [Indexed HBase] Can I add index in an existing table?

2010-02-26 Thread Ram Kulbak
Hi Shen,

The first thing you need to verify is that you can switch to the
IdxRegion implementation without problems. I've just checked that the
following steps work on the PerformanceEvaluation tables. I would
suggest you backup your hbase production instance before attempting
this (or create and try it out on a sandbox instance)

* Stop hbase
* Edit  conf/hbase-env.sh file and add IHBASE to your classpath.
Here's an example which assumes you don't need to add anything else to
your classpath, make sure the HBASE_HOME is defined or simply
 substitute it with the full path of the hbase installation directory:
    export HBASE_CLASSPATH=(`find $HBASE_HOME/contrib/indexed -name '*jar' | tr -s "\n" ":"`)

* Edit conf/hbase-site.xml and set IdxRegion to be the region implementation:

 <property>
   <name>hbase.hregion.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.IdxRegion</value>
 </property>

* Propagate the configuration to all slaves
* Start HBASE


Next, modify the table you want to index using code similar to this:

HBaseConfiguration conf = new HBaseConfiguration();

HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable(TABLE_NAME);
admin.modifyColumn(TABLE_NAME, FAMILY_NAME1, IDX_COLUMN_DESCRIPTOR1);
  ...
admin.modifyColumn(TABLE_NAME, FAMILY_NAMEN, IDX_COLUMN_DESCRIPTORN);
  admin.enableTable(TABLE_NAME);
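
(The IDX_COLUMN_DESCRIPTOR values above are IdxColumnDescriptor instances; a
minimal sketch of building one, based on the IdxColumnDescriptor example in
the follow-up message below, with placeholder constants:)

IdxColumnDescriptor idxDescriptor = new IdxColumnDescriptor(FAMILY_NAME1);
// Index a single qualifier of this family as a raw byte[] value;
// addIndexDescriptor may throw IOException.
idxDescriptor.addIndexDescriptor(
    new IdxIndexDescriptor(QUALIFIER_NAME, IdxQualifierType.BYTE_ARRAY));
// Pass idxDescriptor to modifyColumn() as IDX_COLUMN_DESCRIPTOR1 above.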

Wait for the table to get indexed. This may take a few minutes. Check
the master web page and verify your index definitions appear correctly
in the table description.

This is it. Please let me know how it goes.

Yoram




2010/2/26 ChingShen chingshenc...@gmail.com:
 Thanks, But I think I need the indexed HBase rather than transactional
 HBase.

 Shen

 2010/2/26 y_823...@tsmc.com

 You can try my code to create a index in the existing table.

 public void AddIdx2ExistingTable(String tablename, String columnfamily,
     String idx_column) throws IOException {
   IndexedTableAdmin admin = new IndexedTableAdmin(config);
   admin.addIndex(Bytes.toBytes(tablename),
       new IndexSpecification(idx_column,
           Bytes.toBytes(columnfamily + ":" + idx_column)));
 }




 Fleming Chiu(邱宏明)
 707-6128
 y_823...@tsmc.com
 週一無肉日吃素救地球(Meat Free Monday Taiwan)





  From: ChingShen chingshenc...@gmail.com
  To: hbase-user hbase-user@hadoop.apache.org  cc: (bcc: Y_823910/TSMC)
  Subject: [Indexed HBase] Can I add index in an existing table?
  Date: 2010/02/26 10:18 AM
  Please respond to hbase-user






 Hi,

 I got http://issues.apache.org/jira/browse/HBASE-2037 that can create a
 new
 table with index, but can I add index in an existing table?
 Any code examples?

 Thanks.

 Shen











 --
 *
 Ching-Shen Chen
 Advanced Technology Center,
 Information & Communications Research Lab.
 E-mail: chenchings...@itri.org.tw
 Tel:+886-3-5915542
 *



Re: [Indexed HBase] Can I add index in an existing table?

2010-02-26 Thread Ram Kulbak
Posting a reply to a question I got off list:

 Ram:
 How do I specify index in HColumnDescriptor that is passed to modifyColumn()
 ?

 Thanks



You will need to use an IdxColumnDescriptor:

Here's a code example for creating a table with a byte array index:

HTableDescriptor tableDescriptor = new HTableDescriptor(TABLE_NAME);
IdxColumnDescriptor idxColumnFamilyDescriptor =
    new IdxColumnDescriptor(FAMILY_NAME);
try {
  idxColumnFamilyDescriptor.addIndexDescriptor(
      new IdxIndexDescriptor(QUALIFIER_NAME, IdxQualifierType.BYTE_ARRAY));
} catch (IOException e) {
  throw new IllegalStateException(e);
}
tableDescriptor.addFamily(idxColumnFamilyDescriptor);


You can add several index descriptors to the same column family and
you can put indexes on more than one column families. You should use
IdxScan with an org.apache.hadoop.hbase.client.idx.exp.Expression set
to match your query criteria. The expression may cross columns from
the same or different families using ANDs and ORs.

Note that several index types are supported. Current types include all
basic types and BigDecimals. Char arrays are also supported.  Types
allow for correct range checking (for example you can quickly evaluate
a scan getting all rows for which a given column has values between 42
and 314). You should make sure that columns which are indexed with a
given qualifier type are actually populated with bytes matching their
type, e.g. if you use IdxQualifierType.LONG make sure that you
actually put values which are 8-byte-long arrays produced by a method
similar to Bytes.toBytes(long).
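
A short sketch of the LONG case just described, with placeholder family and
qualifier names (not part of the original example):

// Declare a LONG-typed index on family 'f', qualifier 'ts'.
IdxColumnDescriptor family = new IdxColumnDescriptor(Bytes.toBytes("f"));
family.addIndexDescriptor(
    new IdxIndexDescriptor(Bytes.toBytes("ts"), IdxQualifierType.LONG));

// The indexed column must then be populated with 8-byte values produced
// by Bytes.toBytes(long), so the index can order and compare them.
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("f"), Bytes.toBytes("ts"), Bytes.toBytes(1234567890L));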

Yoram


2010/2/26 Ram Kulbak ram.kul...@gmail.com:
 Hi Shen,

 The first thing you need to verify is that you can switch to the
 IdxRegion implementation without problems. I've just checked that the
 following steps work on the PerformanceEvaluation tables. I would
 suggest you backup your hbase production instance before attempting
 this (or create and try it out on a sandbox instance)

 * Stop hbase
 * Edit  conf/hbase-env.sh file and add IHBASE to your classpath.
 Here's an example which assumes you don't need to add anything else to
 your classpath, make sure the HBASE_HOME is defined or simply
 substitute it with the full path of the hbase installation directory:
    export HBASE_CLASSPATH=(`find $HBASE_HOME/contrib/indexed -name '*jar' | tr -s "\n" ":"`)

 * Edit conf/hbase-site.xml and set IdxRegion to be the region implementation:

 <property>
   <name>hbase.hregion.impl</name>
   <value>org.apache.hadoop.hbase.regionserver.IdxRegion</value>
 </property>

 * Propagate the configuration to all slaves
 * Start HBASE


 Next, modify the table you want to index using code similar to this:

HBaseConfiguration conf = new HBaseConfiguration();

HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable(TABLE_NAME);
admin.modifyColumn(TABLE_NAME, FAMILY_NAME1, IDX_COLUMN_DESCRIPTOR1);
  ...
admin.modifyColumn(TABLE_NAME, FAMILY_NAMEN, IDX_COLUMN_DESCRIPTORN);
  admin.enableTable(TABLE_NAME);

 Wait for the table to get indexed. This may take a few minutes. Check
 the master web page and verify your index definitions appear correctly
 in the table description.

 This is it. Please let me know how it goes.

 Yoram




 2010/2/26 ChingShen chingshenc...@gmail.com:
 Thanks, But I think I need the indexed HBase rather than transactional
 HBase.

 Shen

 2010/2/26 y_823...@tsmc.com

 You can try my code to create a index in the existing table.

 public void AddIdx2ExistingTable(String tablename, String columnfamily,
     String idx_column) throws IOException {
   IndexedTableAdmin admin = new IndexedTableAdmin(config);
   admin.addIndex(Bytes.toBytes(tablename),
       new IndexSpecification(idx_column,
           Bytes.toBytes(columnfamily + ":" + idx_column)));
 }




 Fleming Chiu(邱宏明)
 707-6128
 y_823...@tsmc.com
 週一無肉日吃素救地球(Meat Free Monday Taiwan)





  From: ChingShen chingshenc...@gmail.com
  To: hbase-user hbase-user@hadoop.apache.org  cc: (bcc: Y_823910/TSMC)
  Subject: [Indexed HBase] Can I add index in an existing table?
  Date: 2010/02/26 10:18 AM
  Please respond to hbase-user






 Hi,

 I got http://issues.apache.org/jira/browse/HBASE-2037 that can create a
 new
 table with index, but can I add index in an existing table?
 Any code examples?

 Thanks.

 Shen






Re: Atomic update of a single row

2010-01-26 Thread Ram Kulbak
I think that the scanning logic was fixed in 0.20.3 (memstore is now cloned).
It's actually GETs that are still not atomic, try running
TestHRegion.testWritesWhileGetting while increasing numQualifiers to
1000.

Regards,
Yoram

On Wed, Jan 27, 2010 at 8:48 AM, Ryan Rawson ryano...@gmail.com wrote:
 Under scanners and log recovery there is no guarantee to row
 atomicity.  This is to be fixed in 0.21 when log recovery is now a
 real possibility (thanks to HDFS-0.21) and scanners need to be fixed
 since the current get code might be replaced with a 1 row scan call.

 -ryan

 On Tue, Jan 26, 2010 at 12:53 PM, Bruno Dumon br...@outerthought.org wrote:
 The lock will in any case cause that writes don't happen concurrently.

 But if a region server were to die between the updates to two column
 families of one row (that are done in one Put operation), would the
 update then be partially applied?

 And that makes me also wonder: do these locks also apply to reads?
 Thus, will all the updates to one row that are part of one Put
 operation become visible 'atomicly' to readers?

 Thanks for any clarification.

 Bruno.

 On Tue, Jan 26, 2010 at 8:02 PM, Jean-Daniel Cryans jdcry...@apache.org 
 wrote:
 In get and put inside HRegion we call that line

 Integer lid = getLock(lockid, row);

 Even if you don't provide a row lock, it will create one for you and
 do the locking stuff. That happens before everything else, so is it
 fair to say that row reads are atomic?

 J-D

 On Tue, Jan 26, 2010 at 1:42 AM, Bruno Dumon br...@outerthought.org wrote:
 Hi,

 At various places I have read that row writes are atomic.

 However, from a curious look at the code of the put method in
 HRegion.java, it seems like the updates of a put operation are written
 to the WAL only for one column family at a time. Is this understanding
 correct, so would it be more correct to say that the writes are
 actually atomic per column family within a row?

 On a related note, it would be nice if one could do both put and
 delete operations on one row in an atomic manner.

 Thanks,

 Bruno





Re: FilterList and SingleColumnValueFilter

2009-12-15 Thread Ram Kulbak
Hi Paul,

I've encountered the same problem. I think its fixed as part of
https://issues.apache.org/jira/browse/HBASE-2037

Regards,
Yoram



On Wed, Dec 16, 2009 at 10:45 AM, Paul Ambrose pambr...@mac.com wrote:

 I ran into some problems with FilterList and SingleColumnValueFilter.

 I created a FilterList with MUST_PASS_ONE and two SingleColumnValueFilters
 (each testing equality on a different columns) and query some trivial data:

 http://pastie.org/744890
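
 (In case the pastie is unreachable, a minimal reconstruction of that setup,
 with placeholder family, qualifier and value names:)

 SingleColumnValueFilter f1 = new SingleColumnValueFilter(
     Bytes.toBytes("fam"), Bytes.toBytes("col1"),
     CompareFilter.CompareOp.EQUAL, Bytes.toBytes("a"));
 SingleColumnValueFilter f2 = new SingleColumnValueFilter(
     Bytes.toBytes("fam"), Bytes.toBytes("col2"),
     CompareFilter.CompareOp.EQUAL, Bytes.toBytes("b"));

 // MUST_PASS_ONE means a row passes if either filter matches (logical OR).
 FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ONE);
 filters.addFilter(f1);
 filters.addFilter(f2);

 Scan scan = new Scan();
 scan.setFilter(filters);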

 The problems that I encountered were two-fold:

 SingleColumnValueFilter.filterKeyValues() returns ReturnCode.INCLUDE
 if the column names do not match. If FilterList is employed, then when the
 first Filter returns INCLUDE (because the column names do not match), no
 more filters for that KeyValue are evaluated.  That is problematic because
 when filterRow() is finally called for those filters, matchedColumn is
 never
 found to be true because they were not invoked (due to FilterList exiting
 from
 the filterList iteration when the name mismatched INCLUDE was returned).
 The fix (at least for this scenario) is for
 SingleColumnValueFilter.filterKeyValues() to
 return ReturnCode.NEXT_ROW (rather than INCLUDE).

 The second problem is at the bottom of FilterList.filterKeyValue()
 where ReturnCode.SKIP is returned if MUST_PASS_ONE is the operator,
 rather than always returning ReturnCode.INCLUDE and then leaving the
 final filter decision to be made by the call to filterRow().   I am sure
 there is a good
 reason for returning SKIP in other scenarios, but it is problematic in
 mine.

 Feedback would be much appreciated.

 Paul










hbase 0.20.0-alpha and zookeeper

2009-06-25 Thread Ram Kulbak
Hi,
I've noticed that hbase 0.20.0-alpha comes with a non-official zookeeper jar
(zookeeper-r785019-hbase-1329.jar).
Can I deploy hbase 0.20.0-alpha with zookeeper 3.1.1 ?

Thanks,
Ram


hbase 0.20.0-alpha transactional and indexed region servers missing?

2009-06-25 Thread Ram Kulbak
Hi,
I can't find the classes TransactionalTable, IndexedTable or any of the
indexed or transactional functionality in hbase 0.20.0-alpha, is this a
mistake?

Thanks,
Ram


Re: Hadoop 0.20.0, xml parsing related error

2009-06-24 Thread Ram Kulbak
Hi,
The exception is a result of having xerces in the classpath. To resolve,
make sure you are using Java 6 and set the following system property:

-Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl


This can also be resolved by the Configuration class(line 1045) making sure
it loads the DocumentBuilderFactory bundled with the JVM and not a 'random'
classpath-dependent factory..
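
For example, a sketch of setting it from code before the first Configuration
object is built (equivalent to the -D flag above, assuming nothing parses a
configuration earlier in the JVM's lifetime):

// Force the JDK-bundled parser so the xerces jar on the classpath is
// not picked up when Hadoop parses its configuration files.
System.setProperty("javax.xml.parsers.DocumentBuilderFactory",
    "com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl");
Configuration conf = new Configuration();  // org.apache.hadoop.conf.Configuration
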
Hope this helps,
Ram



On Wed, Jun 24, 2009 at 6:42 PM, murali krishna muralikpb...@yahoo.comwrote:

 Hi,

 Recently migrated to hadoop-0.20.0 and I am facing
 https://issues.apache.org/jira/browse/HADOOP-5254

 Failed to set setXIncludeAware(true) for parser
 org.apache.xerces.jaxp.documentbuilderfactoryi...@1e9e5c73:java.lang.UnsupportedOperationException:
 This parser does not support specification null version null
 java.lang.UnsupportedOperationException: This parser does not support
 specification null version null
at
 javax.xml.parsers.DocumentBuilderFactory.setXIncludeAware(DocumentBuilderFactory.java:590)
at
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1053)


 So I tried including xml-apis-1.3.04.jar:xerces-2_9_1/xercesImpl.jar in
 hadoop classpath. But it started throwing another exception

 09/06/23 05:53:02 FATAL conf.Configuration: error parsing conf file:
 javax.xml.parsers.ParserConfigurationException: Feature '
 http://apache.org/xml/features/xinclude' is not recognized.
 Exception in thread main java.lang.RuntimeException:
 javax.xml.parsers.ParserConfigurationException: Feature '
 http://apache.org/xml/features/xinclude' is not recognized.
at
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1170)

 The former was caught and a WARN msg was logged by Configuration.java, but
 the latter is uncaught and fails.

 What are the correct jar files / versions that should be included to avoid
 this?

 Thanks,
 Murali



Question: index package in contrib (lucene index)

2009-05-29 Thread Tenaali Ram
Anyone ?
Any help to understand this package is appreciated.

Thanks,
T

On Thu, May 28, 2009 at 3:18 PM, Tenaali Ram tenaali...@gmail.com wrote:

 Hi,

 I am trying to understand the code of index package to build a distributed
 Lucene index. I have some very basic questions and would really appreciate
 if someone can help me understand this code-

 1) If I already have Lucene index (divided into shards), should I upload
 these indexes into HDFS and provide its location or the code will pick these
 shards from local file system ?

 2) How is the code adding a document in the lucene index, I can see there
 is a index selection policy. Assuming round robin policy is chosen, how is
 the code adding a document in the lucene index? This is related to first
 question - is the index where the new document is to be added in HDFS or in
 local file system. I read in the README that the index is first created on
 local file system, then copied back to HDFS. Can someone please point me to
 the code that is doing this.

 3) After the map reduce job finishes, where are the final indexes ? In HDFS
 ?

 4) Correct me if I am wrong- the code builds multiple indexes, where each
 index is an instance of Lucene Index having a disjoint subset of documents
 from the corpus. So, if I have to search a term, I have to search each index
 and then merge the result. If this is correct, then how is the IDF of a term
 which is a global statistic computed and updated in each index ? I mean each
 index can compute the IDF wrt. to the subset of documents that it has, but
 can not compute the global IDF of a term (since it knows nothing about other
 indexes, which might have the same term in other documents).

 Thanks,
 -T





Re: Question: index package in contrib (lucene index)

2009-05-29 Thread Tenaali Ram
Thanks Jun!

On Fri, May 29, 2009 at 2:49 PM, Jun Rao jun...@almaden.ibm.com wrote:

 Reply inlined below.

 Jun
 IBM Almaden Research Center
 K55/B1, 650 Harry Road, San Jose, CA  95120-6099

 jun...@almaden.ibm.com


 Tenaali Ram tenaali...@gmail.com wrote on 05/28/2009 03:18:53 PM:

  Hi,
 
  I am trying to understand the code of index package to build a
 distributed
  Lucene index. I have some very basic questions and would really
 appreciate
  if someone can help me understand this code-
 
  1) If I already have Lucene index (divided into shards), should I upload
  these indexes into HDFS and provide its location or the code will pick
 these
  shards from local file system ?

 Yes, you need to put the old index to HDFS first.

 
  2) How is the code adding a document in the lucene index, I can see there
 is
  a index selection policy. Assuming round robin policy is chosen, how is
 the
  code adding a document in the lucene index? This is related to first
  question - is the index where the new document is to be added in HDFS or
 in
  local file system. I read in the README that the index is first created
 on
  local file system, then copied back to HDFS. Can someone please point me
 to
  the code that is doing this.
 

 See contrib.index.example.

  3) After the map reduce job finishes, where are the final indexes ? In
 HDFS
  ?

 They will be in HDFS.

 
  4) Correct me if I am wrong- the code builds multiple indexes, where each
  index is an instance of Lucene Index having a disjoint subset of
 documents
  from the corpus. So, if I have to search a term, I have to search each
 index
  and then merge the result. If this is correct, then how is the IDF of a
 term
  which is a global statistic computed and updated in each index ? I mean
 each
  index can compute the IDF wrt. to the subset of documents that it has,
 but
  can not compute the global IDF of a term (since it knows nothing about
 other
  indexes, which might have the same term in other documents).
 

 This package only deals with index builds. The shards are disjoint and it's
 up to the index server to calculate the ranks. For distributed TF/IDF
 support, you may want to look into Katta.

  Thanks,
  -T


Question: index package in contrib (lucene index)

2009-05-28 Thread Tenaali Ram
Hi,

I am trying to understand the code of the index package to build a distributed
Lucene index. I have some very basic questions and would really appreciate it
if someone could help me understand this code-

1) If I already have a Lucene index (divided into shards), should I upload
these indexes into HDFS and provide their location, or will the code pick these
shards up from the local file system?

2) How does the code add a document to the Lucene index? I can see there is
an index selection policy. Assuming the round-robin policy is chosen, how is
the document added? This is related to the first question - is the index where
the new document is to be added in HDFS or on the local file system? I read in
the README that the index is first created on the local file system, then
copied back to HDFS. Can someone please point me to the code that does this.

3) After the map reduce job finishes, where are the final indexes ? In HDFS
?

4) Correct me if I am wrong- the code builds multiple indexes, where each
index is an instance of Lucene Index having a disjoint subset of documents
from the corpus. So, if I have to search a term, I have to search each index
and then merge the result. If this is correct, then how is the IDF of a term
which is a global statistic computed and updated in each index ? I mean each
index can compute the IDF wrt. to the subset of documents that it has, but
can not compute the global IDF of a term (since it knows nothing about other
indexes, which might have the same term in other documents).

Thanks,
-T


Tips on sorting using Hadoop

2008-09-12 Thread Tenaali Ram
Hi,
I want to sort my records (consisting of string, int, float) using Hadoop.

One way I have found is to set the number of reducers to 1, but this would mean
all the records go to a single reducer and it won't be efficient. Can anyone
point me to a better way to do sorting using Hadoop?

Thanks,
Tenaali


Hadoop for computationally intensive tasks (no data)

2008-09-04 Thread Tenaali Ram
Hi,

I am new to Hadoop. What I have understood so far is that Hadoop is used to
process huge amounts of data using the map-reduce paradigm.

I am working on a problem where I need to perform a large number of
computations; most of them can be done independently of each other (so
I think each mapper can handle one or more such computations). However, there
is no data involved - it is just a number-crunching job. Is it suited for
Hadoop?

Has anyone used Hadoop merely for number crunching? If yes, how should I
define the input for the job and ensure that the computations are distributed
to all nodes in the grid?

Thanks,
Tenaali