[ https://issues.apache.org/jira/browse/PHOENIX-1437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206607#comment-14206607 ]

Eli Levine commented on PHOENIX-1437:
-------------------------------------

Taylor, can you add the following to get us started: the Phoenix version, your schema, 
and the query you are running? Thanks.
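
In the meantime, one thing worth checking: the trace dies in ParallelIterators.submitWork, i.e. while Phoenix is handing scan work to its client-side thread pool, so the worker JVM appears to be at its native-thread limit rather than out of heap. Capping that pool through connection properties may be a useful data point or workaround. A minimal sketch, assuming the standard phoenix.query.threadPoolSize / phoenix.query.queueSize client properties from the Phoenix tuning docs and placeholder ZooKeeper quorum and table names:
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class BoundedPhoenixClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Cap the client-side pool Phoenix uses to run scans in parallel.
        // Property names/values are assumptions from the tuning docs, not
        // something confirmed against this report's Phoenix version.
        props.setProperty("phoenix.query.threadPoolSize", "32");
        props.setProperty("phoenix.query.queueSize", "500");

        // "zk-quorum" and "SOME_TABLE" are placeholders for the real values.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-quorum", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(1) FROM SOME_TABLE")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}
{code}
As far as I know the pool is created once per connection URL in a given JVM, so these properties (or the equivalent entries in the client-side hbase-site.xml) need to be in place before the first Phoenix connection is opened in the Storm worker.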

> java.lang.OutOfMemoryError: unable to create new native thread
> --------------------------------------------------------------
>
>                 Key: PHOENIX-1437
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1437
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Taylor Finnell
>
> I'm getting a java.lang.OutOfMemoryError when using Phoenix from a Storm topology. Here is the full stack trace.
> {code}
> java.lang.OutOfMemoryError: unable to create new native thread
>       at java.lang.Thread.start0(Native Method) ~[na:1.7.0_45]
>       at java.lang.Thread.start(java/lang/Thread.java:713) ~[na:1.7.0_45]
>       at java.util.concurrent.ThreadPoolExecutor.addWorker(java/util/concurrent/ThreadPoolExecutor.java:949) ~[na:1.7.0_45]
>       at java.util.concurrent.ThreadPoolExecutor.execute(java/util/concurrent/ThreadPoolExecutor.java:1360) ~[na:1.7.0_45]
>       at java.util.concurrent.AbstractExecutorService.submit(java/util/concurrent/AbstractExecutorService.java:132) ~[na:1.7.0_45]
>       at org.apache.phoenix.iterate.ParallelIterators.submitWork(org/apache/phoenix/iterate/ParallelIterators.java:356) ~[stormjar.jar:na]
>       at org.apache.phoenix.iterate.ParallelIterators.getIterators(org/apache/phoenix/iterate/ParallelIterators.java:265) ~[stormjar.jar:na]
>       at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(org/apache/phoenix/iterate/ConcatResultIterator.java:44) ~[stormjar.jar:na]
>       at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(org/apache/phoenix/iterate/ConcatResultIterator.java:66) ~[stormjar.jar:na]
>       at org.apache.phoenix.iterate.ConcatResultIterator.next(org/apache/phoenix/iterate/ConcatResultIterator.java:86) ~[stormjar.jar:na]
>       at org.apache.phoenix.jdbc.PhoenixResultSet.next(org/apache/phoenix/jdbc/PhoenixResultSet.java:732) ~[stormjar.jar:na]
>       at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606) ~[na:1.7.0_45]
>       at RUBY.each(file:/mnt/hadoop/storm/supervisor/stormdist/korrelate_match_log_processor_staging_KOR-2325-online_sync_to_hbase_tf_part_three-1-1415715986/stormjar.jar!/lib/korrelate_match_log_processor/cleanroom_online_event_adapter.rb:51) ~[na:na]
>       at RUBY.finish_batch(file:/mnt/hadoop/storm/supervisor/stormdist/korrelate_match_log_processor_staging_KOR-2325-online_sync_to_hbase_tf_part_three-1-1415715986/stormjar.jar!/lib/korrelate_match_log_processor/bolt/abstract_event_reader_bolt.rb:68) ~[na:na]
>       at RUBY.finishBatch(/Users/tfinnell/.rvm/gems/jruby-1.7.11@O2O-jruby/gems/redstorm-0.6.6/lib/red_storm/proxy/batch_bolt.rb:51) ~[na:na]
>       at redstorm.proxy.BatchBolt.finishBatch(redstorm/proxy/BatchBolt.java:149) ~[stormjar.jar:na]
>       at redstorm.storm.jruby.JRubyTransactionalBolt.finishBatch(redstorm/storm/jruby/JRubyTransactionalBolt.java:56) ~[stormjar.jar:na]
>       at backtype.storm.coordination.BatchBoltExecutor.finishedId(backtype/storm/coordination/BatchBoltExecutor.java:76) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.coordination.CoordinatedBolt.checkFinishId(backtype/storm/coordination/CoordinatedBolt.java:259) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.coordination.CoordinatedBolt.execute(backtype/storm/coordination/CoordinatedBolt.java:322) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.daemon.executor$fn__4329$tuple_action_fn__4331.invoke(executor.clj:630) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.daemon.executor$fn__4329$tuple_action_fn__4331.invoke(backtype/storm/daemon/executor.clj:630) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.daemon.executor$mk_task_receiver$fn__4252.invoke(executor.clj:398) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.daemon.executor$mk_task_receiver$fn__4252.invoke(backtype/storm/daemon/executor.clj:398) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.disruptor$clojure_handler$reify__1747.onEvent(disruptor.clj:58) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.disruptor$clojure_handler$reify__1747.onEvent(backtype/storm/disruptor.clj:58) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(backtype/storm/utils/DisruptorQueue.java:104) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(backtype/storm/utils/DisruptorQueue.java:78) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:77) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.disruptor$consume_batch_when_available.invoke(backtype/storm/disruptor.clj:77) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.daemon.executor$fn__4329$fn__4341$fn__4388.invoke(executor.clj:745) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.daemon.executor$fn__4329$fn__4341$fn__4388.invoke(backtype/storm/daemon/executor.clj:745) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.util$async_loop$fn__442.invoke(util.clj:436) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at backtype.storm.util$async_loop$fn__442.invoke(backtype/storm/util.clj:436) ~[storm-core-0.9.1.2.1.2.0-402.jar:0.9.1.2.1.2.0-402]
>       at clojure.lang.AFn.run(clojure/lang/AFn.java:24) ~[clojure-1.4.0.jar:na]
>       at java.lang.Thread.run(java/lang/Thread.java:744) ~[na:1.7.0_45]
> {code}
> Here is the relevant system configuration:
> {code}
> ulimit -a
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 240435
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 240435
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> {code}
> None of the tables I am querying against have more than 4 regions.
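
Since the error is thrown while creating a worker thread (not while allocating heap), and the queried tables have only a few regions, it would also help to know how many threads the worker JVM is holding when this happens and whether the Phoenix pool threads keep growing across batches. A minimal diagnostic sketch using only JDK APIs; the class and the prefix-grouping logic are hypothetical helpers, not part of Phoenix or Storm:
{code}
import java.util.Map;
import java.util.TreeMap;

// Hypothetical diagnostic helper: dump live thread counts grouped by
// thread-name prefix, to see which pools are accumulating threads.
public class ThreadCensus {
    public static void main(String[] args) {
        Map<String, Integer> byPrefix = new TreeMap<String, Integer>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            // Strip trailing digits so e.g. "...-thread-42" and "...-thread-43"
            // fall into the same bucket.
            String prefix = t.getName().replaceAll("\\d+$", "");
            Integer current = byPrefix.get(prefix);
            byPrefix.put(prefix, current == null ? 1 : current + 1);
        }
        System.out.println("live threads: " + Thread.activeCount());
        for (Map.Entry<String, Integer> e : byPrefix.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
{code}
Comparing that count (summed across all workers on the box) against the max user processes limit shown above would confirm whether the limit itself is being hit or whether threads are leaking somewhere in the topology.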



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
