Hi,

Yes, I have now tried it on one of our Linux servers in local mode with a larger file. Well, it starts out fine, as you can see from the attached logs, but at some point it crashes...
What am I - again - doing wrong now? ;-)
(BTW: If you need the test data to look into my issue, I can certainly provide it.)
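(For reference, the earlier OOME went away once I gave the test class a larger heap when launching it. The class name and the -Xmx value below are only placeholders, i.e. roughly

    java -Xmx1024m -cp <hadoop/hbase/job classpath> my.test.TriplesLoader

with the heap sized to the program's steady-state footprint.)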

test class log:
-------------

07/11/08 18:20:15 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
07/11/08 18:20:15 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
07/11/08 18:20:15 INFO mapred.FileInputFormat: Total input paths to process : 1
07/11/08 18:20:16 INFO mapred.JobClient: Running job: job_local_1
07/11/08 18:20:16 INFO mapred.MapTask: numReduceTasks: 1
07/11/08 18:20:17 INFO mapred.JobClient:  map 0% reduce 0%
07/11/08 18:20:19 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:0+33554432
07/11/08 18:20:20 INFO mapred.JobClient:  map 26% reduce 0%
07/11/08 18:20:22 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:0+33554432
07/11/08 18:20:22 INFO mapred.JobClient:  map 69% reduce 0%
07/11/08 18:20:25 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:0+33554432
07/11/08 18:20:25 INFO mapred.JobClient:  map 100% reduce 0%
07/11/08 18:20:28 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:0+33554432
07/11/08 18:20:28 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:0+33554432
07/11/08 18:20:28 INFO mapred.TaskRunner: Task 'map_0000' done.
07/11/08 18:20:28 INFO mapred.MapTask: numReduceTasks: 1
07/11/08 18:20:31 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:0+33554432
07/11/08 18:20:31 INFO mapred.JobClient:  map 50% reduce 0%
07/11/08 18:20:31 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:33554432+33554432
07/11/08 18:20:32 INFO mapred.JobClient:  map 69% reduce 0%
07/11/08 18:20:34 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:33554432+33554432
07/11/08 18:20:34 INFO mapred.JobClient:  map 83% reduce 0%
07/11/08 18:20:37 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:33554432+33554432
07/11/08 18:20:37 INFO mapred.JobClient:  map 100% reduce 0%
07/11/08 18:20:40 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:33554432+33554432
07/11/08 18:20:40 INFO mapred.TaskRunner: Task 'map_0001' done.
07/11/08 18:20:40 INFO mapred.MapTask: numReduceTasks: 1
07/11/08 18:20:40 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:33554432+33554432
07/11/08 18:20:40 INFO mapred.JobClient:  map 66% reduce 0%
07/11/08 18:20:43 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:67108864+33554432
07/11/08 18:20:43 INFO mapred.JobClient:  map 80% reduce 0%
07/11/08 18:20:46 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:67108864+33554432
07/11/08 18:20:46 INFO mapred.JobClient:  map 95% reduce 0%
07/11/08 18:20:49 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:67108864+33554432
07/11/08 18:20:49 INFO mapred.JobClient:  map 100% reduce 0%
07/11/08 18:20:50 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:67108864+33554432
07/11/08 18:20:50 INFO mapred.TaskRunner: Task 'map_0002' done.
07/11/08 18:20:50 INFO mapred.MapTask: numReduceTasks: 1
07/11/08 18:20:52 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:67108864+33554432
07/11/08 18:20:52 INFO mapred.JobClient:  map 75% reduce 0%
07/11/08 18:20:53 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:100663296+33554432
07/11/08 18:20:53 INFO mapred.JobClient:  map 85% reduce 0%
07/11/08 18:20:56 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:100663296+33554432
07/11/08 18:20:56 INFO mapred.JobClient:  map 95% reduce 0%
07/11/08 18:20:59 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:100663296+33554432
07/11/08 18:20:59 INFO mapred.JobClient:  map 100% reduce 0%
07/11/08 18:21:01 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:100663296+33554432
07/11/08 18:21:01 INFO mapred.TaskRunner: Task 'map_0003' done.
07/11/08 18:21:01 INFO mapred.MapTask: numReduceTasks: 1
07/11/08 18:21:02 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:100663296+33554432
07/11/08 18:21:03 INFO mapred.JobClient:  map 80% reduce 0%
07/11/08 18:21:04 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:134217728+36595463
07/11/08 18:21:05 INFO mapred.JobClient:  map 87% reduce 0%
07/11/08 18:21:07 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:134217728+36595463
07/11/08 18:21:08 INFO mapred.JobClient:  map 94% reduce 0%
07/11/08 18:21:10 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:134217728+36595463
07/11/08 18:21:11 INFO mapred.JobClient:  map 99% reduce 0%
07/11/08 18:21:13 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:134217728+36595463
07/11/08 18:21:13 INFO mapred.JobClient:  map 100% reduce 0%
07/11/08 18:21:15 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:134217728+36595463
07/11/08 18:21:15 INFO mapred.TaskRunner: Task 'map_0004' done.
07/11/08 18:21:16 INFO mapred.LocalJobRunner: file:/home/holger/data/input/test:134217728+36595463
07/11/08 18:21:26 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:26 INFO mapred.JobClient:  map 100% reduce 66%
07/11/08 18:21:29 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:29 INFO mapred.JobClient:  map 100% reduce 67%
07/11/08 18:21:32 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:35 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:38 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:38 INFO mapred.JobClient:  map 100% reduce 68%
07/11/08 18:21:41 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:44 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:44 INFO mapred.JobClient:  map 100% reduce 69%
07/11/08 18:21:47 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:50 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:51 INFO mapred.JobClient:  map 100% reduce 70%
07/11/08 18:21:53 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:56 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:21:57 INFO mapred.JobClient:  map 100% reduce 71%
07/11/08 18:21:59 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:02 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:03 INFO mapred.JobClient:  map 100% reduce 72%
07/11/08 18:22:05 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:08 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:11 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:11 INFO mapred.JobClient:  map 100% reduce 73%
07/11/08 18:22:14 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:17 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:17 INFO mapred.JobClient:  map 100% reduce 74%
07/11/08 18:22:20 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:23 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:23 INFO mapred.JobClient:  map 100% reduce 75%
07/11/08 18:22:26 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:29 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:29 INFO mapred.JobClient:  map 100% reduce 76%
07/11/08 18:22:32 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:35 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:38 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:41 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:42 INFO mapred.JobClient:  map 100% reduce 77%
07/11/08 18:22:44 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:22:47 INFO mapred.LocalJobRunner: reduce > reduce
07/11/08 18:23:14 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 1 time(s).
07/11/08 18:23:15 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 2 time(s).
07/11/08 18:23:16 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 3 time(s).
07/11/08 18:23:17 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 4 time(s).
07/11/08 18:23:18 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 5 time(s).
07/11/08 18:23:19 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 6 time(s).
07/11/08 18:23:20 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 7 time(s).
07/11/08 18:23:21 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 8 time(s).
07/11/08 18:23:22 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 9 time(s).
07/11/08 18:23:23 INFO ipc.Client: Retrying connect to server: /127.0.1.1:60020. Already tried 10 time(s).

region server log:
-----------------

2007-11-08 18:19:50,683 INFO org.apache.hadoop.hbase.HRegionServer: Root dir: /home/holger/tmp/hbase
2007-11-08 18:19:50,799 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2007-11-08 18:19:50,800 INFO org.apache.hadoop.hbase.HLog: new log writer created at /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.000
2007-11-08 18:19:50,878 INFO org.mortbay.util.Credential: Checking Resource aliases
2007-11-08 18:19:50,924 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
2007-11-08 18:19:50,924 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
2007-11-08 18:19:50,925 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
2007-11-08 18:19:51,731 INFO org.mortbay.util.Container: Started [EMAIL PROTECTED]
2007-11-08 18:19:51,871 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
2007-11-08 18:19:51,873 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:60030
2007-11-08 18:19:51,873 INFO org.mortbay.util.Container: Started [EMAIL PROTECTED]
2007-11-08 18:19:51,875 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 60020: starting
2007-11-08 18:19:51,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 60020: starting
2007-11-08 18:19:51,876 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 60020: starting
2007-11-08 18:19:51,876 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 60020: starting
2007-11-08 18:19:51,876 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 60020: starting
2007-11-08 18:19:51,876 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 60020: starting
2007-11-08 18:19:51,876 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 60020: starting
2007-11-08 18:19:51,876 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 60020: starting
2007-11-08 18:19:51,877 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 60020: starting
2007-11-08 18:19:51,877 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 60020: starting
2007-11-08 18:19:51,877 INFO org.apache.hadoop.hbase.HRegionServer: HRegionServer started at: 127.0.1.1:60020
2007-11-08 18:19:51,884 INFO org.apache.hadoop.hbase.HRegionServer: MSG_REGION_OPEN : regionname: -ROOT-,,0, startKey: <>, tableDesc: {name: -ROOT-, families: {info:={name: info, max versions: 1, compression: NONE, in memory: false, max length: 2147483647, bloom filter: none}}}
2007-11-08 18:19:51,974 INFO org.apache.hadoop.hbase.HRegion: region -ROOT-,,0 available
2007-11-08 18:19:51,975 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 60020: starting
2007-11-08 18:19:57,934 INFO org.apache.hadoop.hbase.HRegionServer: MSG_REGION_OPEN : regionname: .META.,,1, startKey: <>, tableDesc: {name: .META., families: {info:={name: info, max versions: 1, compression: NONE, in memory: false, max length: 2147483647, bloom filter: none}}}
2007-11-08 18:19:57,961 INFO org.apache.hadoop.hbase.HRegion: region .META.,,1 available
2007-11-08 18:20:06,994 INFO org.apache.hadoop.hbase.HRegionServer: MSG_REGION_OPEN : regionname: triples,,-7804573999218539490, startKey: <>, tableDesc: {name: triples, families: {triples:={name: triples, max versions: 3, compression: NONE, in memory: false, max length: 2147483647, bloom filter: none}}}
2007-11-08 18:20:06,997 INFO org.apache.hadoop.hbase.HRegion: region triples,,-7804573999218539490 available
2007-11-08 18:21:41,152 INFO org.apache.hadoop.hbase.HRegionServer: Rolling hlog. Number of entries: 50675
2007-11-08 18:21:41,751 INFO org.apache.hadoop.hbase.HLog: new log writer created at /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.001
2007-11-08 18:21:51,095 INFO org.apache.hadoop.hbase.HRegion: Optional flush called 10 times when data present without flushing. Forcing one.
2007-11-08 18:21:51,151 INFO org.apache.hadoop.hbase.HRegion: Optional flush called 10 times when data present without flushing. Forcing one.
2007-11-08 18:21:51,172 INFO org.apache.hadoop.hbase.HRegionServer: Rolling hlog. Number of entries: 35719
2007-11-08 18:21:51,464 INFO org.apache.hadoop.hbase.HLog: new log writer created at /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.002
2007-11-08 18:21:51,464 INFO org.apache.hadoop.hbase.HLog: removing old log file /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.000 whose highest sequence/edit id is 50677
2007-11-08 18:21:51,470 INFO org.apache.hadoop.hbase.HLog: removing old log file /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.001 whose highest sequence/edit id is 86397
2007-11-08 18:22:01,192 INFO org.apache.hadoop.hbase.HRegionServer: Rolling hlog. Number of entries: 35877
2007-11-08 18:22:01,403 INFO org.apache.hadoop.hbase.HLog: new log writer created at /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.003
2007-11-08 18:22:01,403 INFO org.apache.hadoop.hbase.HLog: removing old log file /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.002 whose highest sequence/edit id is 122275
2007-11-08 18:22:05,985 INFO org.apache.hadoop.hbase.HRegion: starting compaction on region triples,,-7804573999218539490
2007-11-08 18:22:11,213 INFO org.apache.hadoop.hbase.HRegionServer: Rolling hlog. Number of entries: 33012
2007-11-08 18:22:11,215 INFO org.apache.hadoop.hbase.HLog: new log writer created at /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.004
2007-11-08 18:22:15,925 INFO org.apache.hadoop.hbase.HRegion: compaction completed on region triples,,-7804573999218539490. Took 9sec
2007-11-08 18:22:21,233 INFO org.apache.hadoop.hbase.HRegionServer: Rolling hlog. Number of entries: 30516
2007-11-08 18:22:21,234 INFO org.apache.hadoop.hbase.HLog: new log writer created at /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.005
2007-11-08 18:22:21,235 INFO org.apache.hadoop.hbase.HLog: removing old log file /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.003 whose highest sequence/edit id is 155287
2007-11-08 18:22:31,262 INFO org.apache.hadoop.hbase.HRegionServer: Rolling hlog. Number of entries: 37779
2007-11-08 18:22:31,264 INFO org.apache.hadoop.hbase.HLog: new log writer created at /home/holger/tmp/hbase/log_127.0.1.1_8425543574119290313_60020/hlog.dat.006
2007-11-08 18:22:31,717 INFO org.apache.hadoop.hbase.HRegion: Blocking updates for 'IPC Server handler 1 on 60020': Memcache size 32.0m is >= than blocking 32.0m size
2007-11-08 18:22:33,735 INFO org.apache.hadoop.hbase.HRegion: Unblocking updates for region triples,,-7804573999218539490 'IPC Server handler 1 on 60020'
2007-11-08 18:22:36,024 INFO org.apache.hadoop.hbase.HRegion: starting compaction on region triples,,-7804573999218539490
2007-11-08 18:22:43,864 INFO org.apache.hadoop.hbase.HRegion: compaction completed on region triples,,-7804573999218539490. Took 7sec
2007-11-08 18:22:44,045 INFO org.apache.hadoop.hbase.HRegion: Splitting triples,,-7804573999218539490 because largest aggregate size is 117.6m and desired size is 64.0m
2007-11-08 18:22:44,046 INFO org.apache.hadoop.hbase.HRegionServer$SplitOrCompactChecker: regionserver/0:0:0:0:0:0:0:0:60020.splitOrCompactChecker exiting
2007-11-08 18:22:44,048 FATAL org.apache.hadoop.hbase.HRegionServer: Set stop flag in regionserver/0:0:0:0:0:0:0:0:60020.splitOrCompactChecker
java.lang.IllegalArgumentException: java.net.URISyntaxException: Illegal character in scheme name at index 7: hregion_triples,http://dblp.l3s.de/d2r/resource/publications/conf/dac/ObermeierK88,5315976128113435400
   at org.apache.hadoop.fs.Path.initialize(Path.java:140)
   at org.apache.hadoop.fs.Path.<init>(Path.java:126)
   at org.apache.hadoop.hbase.HRegion.getRegionDir(HRegion.java:1872)
   at org.apache.hadoop.hbase.HRegion.getSplitRegionDir(HRegion.java:503)
   at org.apache.hadoop.hbase.HRegion.closeAndSplit(HRegion.java:427)
   at org.apache.hadoop.hbase.HRegionServer$SplitOrCompactChecker.split(HRegionServer.java:217)
   at org.apache.hadoop.hbase.HRegionServer$SplitOrCompactChecker.checkForSplitsOrCompactions(HRegionServer.java:202)
   at org.apache.hadoop.hbase.HRegionServer$SplitOrCompactChecker.chore(HRegionServer.java:186)
   at org.apache.hadoop.hbase.Chore.run(Chore.java:58)
Caused by: java.net.URISyntaxException: Illegal character in scheme name at index 7: hregion_triples,http://dblp.l3s.de/d2r/resource/publications/conf/dac/ObermeierK88,5315976128113435400
   at java.net.URI$Parser.fail(URI.java:2809)
   at java.net.URI$Parser.checkChars(URI.java:2982)
   at java.net.URI$Parser.parse(URI.java:3009)
   at java.net.URI.<init>(URI.java:736)
   at org.apache.hadoop.fs.Path.initialize(Path.java:137)
   ... 8 more
2007-11-08 18:22:44,050 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 60020, call batchUpdate(triples,,-7804573999218539490, 9223372036854775807, [EMAIL PROTECTED]) from 127.0.1.1:40128: error: java.io.IOException: Server not running
java.io.IOException: Server not running
   at org.apache.hadoop.hbase.HRegionServer.checkOpen(HRegionServer.java:1343)
   at org.apache.hadoop.hbase.HRegionServer.batchUpdate(HRegionServer.java:1108)
   at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:379)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)
2007-11-08 18:22:44,054 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 60020, call openScanner(.META.,,1, [Lorg.apache.hadoop.io.Text;@580754fc, triples, 1194546164053, null) from 127.0.1.1:40128: error: java.io.IOException: Server not running
java.io.IOException: Server not running
   at org.apache.hadoop.hbase.HRegionServer.checkOpen(HRegionServer.java:1343)
   at org.apache.hadoop.hbase.HRegionServer.openScanner(HRegionServer.java:1160)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:379)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)
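
One thing I notice in the trace: the name that fails to parse is the region name, and (if I read the name format correctly) the start key embedded in it is one of my row keys, which are URLs. It looks like the colon from "http:" makes the whole name be treated as if it had a URI scheme when the split builds its directory path. Just to illustrate this, a stand-alone snippet using plain java.net.URI (not the actual HBase code path) fails on the same string with exactly the message from the log:

    import java.net.URI;
    import java.net.URISyntaxException;

    public class RegionNameUriCheck {
        public static void main(String[] args) {
            // Same shape as the name in the log:
            // "hregion_" + table + "," + startKey (here a URL) + "," + regionId
            String name = "hregion_triples,http://dblp.l3s.de/d2r/resource/"
                + "publications/conf/dac/ObermeierK88,5315976128113435400";
            try {
                new URI(name);
            } catch (URISyntaxException e) {
                // Prints: Illegal character in scheme name at index 7: hregion_triples,http://...
                // Everything up to the first ':' is taken to be a URI scheme, and the '_'
                // at index 7 is not a legal scheme character, hence the failure.
                System.out.println(e.getMessage());
            }
        }
    }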

Cheers,
Holger

Michael Stack wrote:
Holger Stenzhorn wrote:
...
Therefore, setting the heap size correctly for the test class solved the issue for the file at hand. ...but in any case, if such a file grows larger and larger, then at some point the described OOME will (most likely) occur again.
Have you tried larger files, Holger? Your program does not seem to be holding references (nor do classes it references such as TableOutputFormat or TextInputFormat), so I'm thinking that once you've set your heap to accommodate your program's steady state, you should be able to run with bigger files without OOME'ing.

St.Ack
