I have also done some experimenting with Jython. In fact, the timings for
the wikisearch example (http://accumulo.apache.org/example/wikisearch.html)
were collected using a simple Jython script that automates the queries.

-Eric

On Sun, Apr 22, 2012 at 2:56 PM, David Medinets <[email protected]> wrote:
> I mentioned groovy the other day and that got me started about
> developing a programmable command-line shell. And then it occurred to
> me that such a shell already existed and is called JRuby. 45 minutes
> later, I present:
>
> # Start the jruby console
> jirb
>
> # require the jar files that accumulo needs.
>
> require 'java'
> require '/home/medined/accumulo-1.5.0-SNAPSHOT/lib/commons-jci-core-1.0.jar'
> require '/home/medined/accumulo-1.5.0-SNAPSHOT/lib/commons-jci-fam-1.0.jar'
> require '/home/medined/accumulo-1.5.0-SNAPSHOT/lib/log4j-1.2.16.jar'
> require '/home/medined/accumulo-1.5.0-SNAPSHOT/lib/libthrift-0.6.1.jar'
> require '/home/medined/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar'
> require '/home/medined/.m2/repository/commons-logging/commons-logging/1.0.4/commons-logging-1.0.4.jar'
> require '/home/medined/.m2/repository/org/apache/hadoop/hadoop-core/0.20.2/hadoop-core-0.20.2.jar'
> require '/home/medined/.m2/repository/org/apache/hadoop/zookeeper/3.3.1/zookeeper-3.3.1.jar'
> require '/home/medined/.m2/repository/org/slf4j/slf4j-api/1.5.6/slf4j-api-1.5.6.jar'
> require '/home/medined/.m2/repository/org/slf4j/slf4j-log4j12/1.4.3/slf4j-log4j12-1.4.3.jar'
> require '/home/medined/accumulo-1.5.0-SNAPSHOT/lib/accumulo-start-1.5.0-SNAPSHOT.jar'
> require '/home/medined/accumulo-1.5.0-SNAPSHOT/lib/accumulo-core-1.5.0-SNAPSHOT.jar'
> require '/home/medined/accumulo-1.5.0-SNAPSHOT/lib/cloudtrace-1.5.0-SNAPSHOT.jar'
>
> # does something basic work?
>
> org.apache.accumulo.start.Platform.getPlatform
> => "Linux-i386-32"
>
> # how about something a little more complex?
>
> org.apache.accumulo.start.classloader.AccumuloClassLoader.printClassPath()
> ... displays a list of jar files.
>
> # do some importing to avoid repeating package names all over the code.
>
> java_import org.apache.accumulo.core.client.ZooKeeperInstance
> java_import org.apache.accumulo.core.data.Mutation
> java_import org.apache.accumulo.core.data.Value
> java_import org.apache.hadoop.io.Text
>
> # define some variables.
>
> instanceName = "development"
> zooKeepers = "localhost"
> user = "root"
> pass = "password"
> tableName = "users"
>
> # instantiate the zookeeper and connector objects.
>
> instance = ZooKeeperInstance.new(instanceName, zooKeepers)
> connector = instance.getConnector(user, pass)
>
> # ensure the accumulo table exists.
>
> unless connector.tableOperations().exists(tableName)
>   connector.tableOperations().create(tableName)
> end
>
> # define a ruby method to handle an insert.
>
> def insert(connector, tableName, rowId, columnFamily, columnQualifier, value)
>   writer = connector.createMultiTableBatchWriter(200000, 300, 4)
>   bw = writer.getBatchWriter(tableName)
>   m = Mutation.new(Text.new(rowId))
>   m.put(Text.new(columnFamily), Text.new(columnQualifier),
>         Value.new(value.to_java_bytes))
>   bw.addMutation(m)
>   writer.close
> end
>
> # insert the record!
>
> insert connector, tableName, "rowid", "cf", "cq", "value"
>
> # celebrate ...
>
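To round out David's session, here is a sketch of reading the record back
with a Scanner from the same jirb console. This is untested here and assumes
the requires, connector, and tableName from the transcript above, a running
Accumulo instance, and one extra import (Authorizations from
org.apache.accumulo.core.security):

```ruby
# Assumes the jirb session above: jars already required, connector created.
java_import org.apache.accumulo.core.security.Authorizations

# Scan the table with empty authorizations. ScannerBase is a Java Iterable,
# so JRuby exposes it through the usual Ruby each iterator.
scanner = connector.createScanner(tableName, Authorizations.new)
scanner.each do |entry|
  puts "#{entry.getKey} => #{entry.getValue}"
end
```

The entry printed should be the "rowid" / "cf" / "cq" / "value" record
inserted above, though I have not run this against the 1.5.0-SNAPSHOT build.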
