Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Eran Zinman
Hi,

Sorry to bother you guys again, but it seems that no matter what I do I
can't run the new version of Nutch with Hadoop 0.20.

I am getting the following exceptions in my logs when I execute
bin/start-all.sh

I don't know what to do! I've tried all kinds of stuff but with no luck... :(

*hadoop-eran-jobtracker-master.log*
2009-12-09 12:04:53,965 FATAL mapred.JobTracker -
java.lang.SecurityException: sealing violation: can't seal package
org.mortbay.util: already loaded
at java.net.URLClassLoader.defineClass(URLClassLoader.java:235)
at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1610)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:180)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:172)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3699)

*hadoop-eran-namenode-master.log*
2009-12-09 12:04:27,583 ERROR namenode.NameNode -
java.lang.SecurityException: sealing violation: can't seal package
org.mortbay.util: already loaded
at java.net.URLClassLoader.defineClass(URLClassLoader.java:235)
at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:220)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:202)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

Thanks for trying to help,
Eran

On Sun, Dec 6, 2009 at 3:51 PM, Eran Zinman zze...@gmail.com wrote:

 Hi,

 Just upgraded to the latest version of Nutch with Hadoop 0.20.

 I'm getting the following exception in the namenode log and DFS doesn't
 start:

 2009-12-06 15:48:32,523 ERROR namenode.NameNode -
 java.lang.SecurityException: sealing violation: can't seal package
 org.mortbay.util: already loaded
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:235)
 at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
 at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:220)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:202)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

 Any help will be appreciated ... quite stuck with this.

 Thanks,
 Eran



Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Andrzej Bialecki

Eran Zinman wrote:

Hi,

Sorry to bother you guys again, but it seems that no matter what I do I
can't run the new version of Nutch with Hadoop 0.20.

I am getting the following exceptions in my logs when I execute
bin/start-all.sh


Do you use the scripts in place, i.e. without deploying the nutch*.job 
to a separate Hadoop cluster? Could you please try it with a standalone 
Hadoop cluster (even if it's a pseudo-distributed, i.e. single node)?
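
For reference, "deploying" here means submitting the job file to a running
cluster, roughly like this (a sketch - the job file name and the crawl
arguments are illustrative):

 bin/hadoop jar nutch-1.0.job org.apache.nutch.crawl.Crawl urls -dir crawl -depth 3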



--
Best regards,
Andrzej Bialecki 
 ___. ___ ___ ___ _ _   __
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com



Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Eran Zinman
Hi Andrzej,

Thanks for your help (as always).

I'm still getting the same exceptions as before when running on a standalone
Hadoop cluster - and in the datanode log I'm also getting:

2009-12-09 12:20:37,805 ERROR datanode.DataNode - java.io.IOException: Call
to 10.0.0.2:9000 failed on local exception: java.io.IOException: Connection
reset by peer
at org.apache.hadoop.ipc.Client.wrapException(Client.java:774)
at org.apache.hadoop.ipc.Client.call(Client.java:742)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
at sun.nio.ch.IOUtil.read(IOUtil.java:206)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at java.io.DataInputStream.readInt(DataInputStream.java:370)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
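
A quick sanity check at this point is whether anything is actually listening
on the NameNode address from the error above (using netcat, if available):

nc -z 10.0.0.2 9000 && echo open || echo closed

If the NameNode already died on the sealing violation, nothing is serving
10.0.0.2:9000, and this datanode error is just a secondary symptom.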

Thanks,
Eran

On Wed, Dec 9, 2009 at 12:12 PM, Andrzej Bialecki a...@getopt.org wrote:

 [...]


Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Eran Zinman
Hi,

Running new Nutch version status:

1. Nutch runs perfectly if Hadoop is disabled (i.e. running in normal mode).
2. Nutch doesn't work when I set it up to work with Hadoop, either in a
single-node or a cluster setup.

*I'm getting an exception: *
ERROR namenode.NameNode - java.lang.SecurityException: sealing violation:
can't seal package org.mortbay.util: already loaded

I thought it might be a good idea to attach my Hadoop conf files, so here
they are:

*core-site.xml*
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://10.0.0.2:9000/</value>
  <description>
    The name of the default file system. Either the literal string
    "local" or a host:port for NDFS.
  </description>
</property>
</configuration>

*mapred-site.xml*
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>10.0.0.2:9001</value>
  <description>
    The host and port that the MapReduce job tracker runs at. If
    "local", then jobs are run in-process as a single map and
    reduce task.
  </description>
</property>

<property>
  <name>mapred.system.dir</name>
  <value>/my_crawler/filesystem/mapreduce/system</value>
</property>

<property>
  <name>mapred.local.dir</name>
  <value>/my_crawler/filesystem/mapreduce/local</value>
</property>
</configuration>

*hdfs-site.xml*
<configuration>
<property>
  <name>dfs.name.dir</name>
  <value>/my_crawler/filesystem/name</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/my_crawler/filesystem/data</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
</configuration>
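
For completeness, with these files in place a fresh setup can be sanity-checked
with the stock Hadoop 0.20 commands (a sketch; note that the format step wipes
dfs.name.dir, so only run it against a fresh DFS):

bin/hadoop namenode -format
bin/start-all.sh
bin/hadoop fs -ls /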

Thanks,
Eran

On Wed, Dec 9, 2009 at 12:22 PM, Eran Zinman zze...@gmail.com wrote:

 [...]

Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Dennis Kubes

1) Is this a new or existing Hadoop cluster?
2) What Java version are you using and what is your environment?

Dennis

Eran Zinman wrote:

[...]

Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Eran Zinman
Hi Dennis,

1) I initially tried to run on my existing DFS and it didn't work. I then
made a backup of my DFS, performed a format, and it still didn't work...

2) I'm using:

java version "1.6.0_0"
OpenJDK Runtime Environment (IcedTea6 1.4.1) (6b14-1.4.1-0ubuntu12)
OpenJDK Client VM (build 14.0-b08, mixed mode, sharing)

3) My environment variables:

ORBIT_SOCKETDIR=/tmp/orbit-eran
SSH_AGENT_PID=3533
GPG_AGENT_INFO=/tmp/seahorse-Gq6lRI/S.gpg-agent:3557:1
TERM=xterm
SHELL=/bin/bash
XDG_SESSION_COOKIE=1a02c2275727547fa7209ad54a91276c-1260199857.905267-2000911890
GTK_RC_FILES=/etc/gtk/gtkrc:/home/eran/.gtkrc-1.2-gnome2
WINDOWID=54653392
GTK_MODULES=canberra-gtk-module
USER=eran
LS_COLORS=no=00:fi=00:di=01;34:ln=01;36:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.svgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:
GNOME_KEYRING_SOCKET=/tmp/keyring-0Vt0yu/socket
SSH_AUTH_SOCK=/tmp/keyring-0Vt0yu/socket.ssh
SESSION_MANAGER=local/eran:/tmp/.ICE-unix/3387
USERNAME=eran
DESKTOP_SESSION=default
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
GDM_XSERVER_LOCATION=local
PWD=/home/eran
JAVA_HOME=/usr/lib/jvm/default-java/
LANG=en_US.UTF-8
GDM_LANG=en_US.UTF-8
GDMSESSION=default
HISTCONTROL=ignoreboth
SHLVL=1
HOME=/home/eran
GNOME_DESKTOP_SESSION_ID=this-is-deprecated
LOGNAME=eran
XDG_DATA_DIRS=/usr/local/share/:/usr/share/:/usr/share/gdm/
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-E4IJ0hMrD8,guid=c3caaf3e590c65a58904ca7f4b1d1fb3
LESSOPEN=| /usr/bin/lesspipe %s
WINDOWPATH=7
DISPLAY=:0.0
LESSCLOSE=/usr/bin/lesspipe %s %s
XAUTHORITY=/home/eran/.Xauthority
COLORTERM=gnome-terminal
_=/usr/bin/printenv

Thanks,
Eran


On Wed, Dec 9, 2009 at 2:38 PM, Dennis Kubes ku...@apache.org wrote:

 1) Is this a new or existing Hadoop cluster?
 2) What Java version are you using and what is your environment?

 Dennis


 [...]

Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Dennis Kubes
Did you do a fresh install of Nutch with Hadoop 0.20, or did you just copy
over the new jars? The sealing violation means multiple copies of the same
jars are being loaded, and the Jetty version changed between Hadoop 0.19 and
0.20.
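
A quick way to check is to look for more than one Jetty jar on the classpath,
e.g. from the top of the Nutch directory (illustrative):

find . -name '*jetty*.jar'

If both an old and a new Jetty show up (for example under lib/), the old one
has to go.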


Dennis

Eran Zinman wrote:

[...]

Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Eran Zinman
Hi Dennis,

Thanks for trying to help.

I don't know what "fresh install" means exactly.

Here is what I've done:
1) Downloaded the latest version of Nutch from the SVN to a new folder.
2) Copied all the custom plugins I've written to the new folder.
3) Edited all configuration files.
4) Executed "ant package".
5) Ran the new Nutch... and got this error.

What did I miss?

Thanks,
Eran

On Wed, Dec 9, 2009 at 3:36 PM, Dennis Kubes ku...@apache.org wrote:

 Did you do a fresh install of Nutch with Hadoop 0.20, or did you just copy
 over the new jars? The sealing violation means multiple copies of the same
 jars are being loaded, and the Jetty version changed between Hadoop 0.19 and 0.20.

 Dennis


 [...]

Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Eran Zinman
Hi all,

Thanks Dennis - you helped me solve the problem.

The problem was that I had two versions of Jetty in my lib folder.

I deleted the old version and voila - it works.

The problem is that both versions exist in the SVN! Although I took a fresh
copy from the SVN, I had both versions in my lib folder. I think we need to
remove the old version from the SVN so people like me won't get confused ...
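
For anyone hitting the same thing, the fix amounts to something like this (the
jar names are illustrative - check what actually sits in your lib folder):

$ ls lib/ | grep -i jetty
jetty-5.1.4.jar      <- stale copy, delete it
jetty-6.1.14.jar
$ rm lib/jetty-5.1.4.jar
$ ant package        # rebuild so the stale jar is not packaged into nutch*.job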

Thanks !
Eran.

On Wed, Dec 9, 2009 at 4:10 PM, Eran Zinman zze...@gmail.com wrote:

 [...]

Re: Nutch Hadoop 0.20 - Exception

2009-12-09 Thread Dennis Kubes
Done.  I have removed the old Jetty jars from the SVN.  Thanks for 
bringing this issue forward.
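
For the record, the removal on the SVN side is just the usual routine (jar name
illustrative):

svn delete lib/jetty-5.1.4.jar
svn commit -m "Remove stale Jetty jar left over from pre-0.20 Hadoop"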


Dennis

Eran Zinman wrote:

[...]