You are likely running into an issue that many others have hit when running the Accumulo client from application servers; it is documented in https://issues.apache.org/jira/browse/ACCUMULO-1379. Are you able to upgrade to 1.5.1? If so, check out the cleanup methods that were added as part of https://issues.apache.org/jira/browse/ACCUMULO-2128. If you are stuck on 1.5.0, you can use the code here https://github.com/jaredwinick/accumulo-1858-test/blob/ACCUMULO-2113/src/main/java/org/apache/accumulo/core/util/ClientThreads.java which was referred to as "The Hammer" due to its brute-force method of cleaning up resources. You can see test results from this code in the ticket at https://issues.apache.org/jira/browse/ACCUMULO-2113 and an example of how to use it from a Servlet at https://github.com/jaredwinick/accumulo-1858-test/blob/ACCUMULO-2113/src/main/java/com/koverse/ApplicationServletContextListener.java .
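If you do upgrade to 1.5.1, a minimal sketch of wiring the ACCUMULO-2128 cleanup into a webapp could look like the listener below. The class and method names (`org.apache.accumulo.core.util.CleanUp.shutdownNow()`, `AccumuloCleanupListener`) are my reading of that ticket, so treat them as assumptions and verify against the client jar you actually deploy:

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.apache.accumulo.core.util.CleanUp;

/**
 * Stops Accumulo client background threads when the webapp is undeployed,
 * so a Jetty redeploy does not pin the old classloader and exhaust PermGen.
 */
public class AccumuloCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // ACCUMULO-2128 (1.5.1+): shut down client threads and release
        // ZooKeeper sessions held by the client's static caches.
        CleanUp.shutdownNow();
    }
}
```

Register the listener in web.xml (a `<listener>` element) so it fires on every undeploy/redeploy cycle, not just on a full Jetty shutdown.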
On Wed, May 7, 2014 at 12:04 PM, David Medinets <[email protected]> wrote:

> I am trying to write a web page that paginates through an Accumulo table.
> The code works but when Jetty restarts the application I seem to run into
> the following error. I'm hoping that I am just forgetting to close a
> resource or something similar. I'm using jetty-9.1.3.v20140225 and Accumulo
> 1.5.0.
>
> The error:
>
> java.lang.OutOfMemoryError: PermGen space
>     at org.eclipse.jetty.server.handler.ErrorHandler.handle(ErrorHandler.java:109)
>
> The code:
>
> Connector connector = null;
> Instance instance = new ZooKeeperInstance(accumuloInstanceName, accumuloZookeeperEnsemble);
> try {
>     connector = instance.getConnector(accumuloUser, accumuloPassword.getBytes());
> } catch (AccumuloException | AccumuloSecurityException e) {
>     throw new RuntimeException("Error getting connector from instance.", e);
> }
>
> tableName = "TedgeField";
>
> Scanner scan = null;
> try {
>     scan = connector.createScanner(tableName, new Authorizations());
> } catch (TableNotFoundException e) {
>     throw new RuntimeException("Error getting scanning table.", e);
> }
> scan.setBatchSize(10);
> if (lastRow != null) {
>     scan.setRange(new Range(new Text(lastRow), false, null, true));
> }
>
> Map<String, Integer> columns = new TreeMap<>();
>
> IteratorSetting iter = new IteratorSetting(15, "fieldNames", RegExFilter.class);
> String rowRegex = null;
> String colfRegex = null;
> String colqRegex = "field";
> String valueRegex = null;
> boolean orFields = false;
> RegExFilter.setRegexs(iter, rowRegex, colfRegex, colqRegex, valueRegex, orFields);
> scan.addScanIterator(iter);
>
> int fetchCount = 0;
> Iterator<Map.Entry<Key, org.apache.accumulo.core.data.Value>> iterator = scan.iterator();
> while (iterator.hasNext()) {
>     Map.Entry<Key, org.apache.accumulo.core.data.Value> entry = iterator.next();
>     String columnName = entry.getKey().getRow().toString();
>     Integer entryCount = Integer.parseInt(entry.getValue().toString());
>     columns.put(columnName, entryCount);
>     fetchCount++;
>     if (fetchCount > scan.getBatchSize()) {
>         lastRow = entry.getKey().getRow();
>         break;
>     }
> }
>
> scan.close();
>
> I'd be happy to update my D4M_Schema project with this code if anyone
> wants to run it locally to validate the error. I didn't want to push broken
> code.
