[jira] [Commented] (ACCUMULO-4851) WAL recovery directory should be deleted before running LogSorter

2018-04-09 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-4851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431479#comment-16431479
 ] 

Josh Elser commented on ACCUMULO-4851:
--

No worries. I think I know what the fix is, just thought I'd mention it to you 
on the off-chance it rang a bell.
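For readers following along: the issue title suggests the shape of the fix, i.e. make recovery idempotent by deleting any stale recovery directory (with its leftover failed/finished markers) before the LogSorter re-sorts the WAL. Below is a minimal sketch of that idea only; the method name and structure are my own, not the committed patch, and java.nio.file stands in for the HDFS FileSystem API a real tserver would use.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RecoveryDirCleanup {

  /**
   * Remove any partial sort output left by a previous, interrupted recovery
   * attempt, then recreate an empty directory, so the log sorter always starts
   * from a clean slate. Sketch only: the real fix would go through Hadoop's
   * FileSystem API (e.g. fs.delete(recoveryPath, true)) against HDFS;
   * java.nio.file is used here so the example is self-contained.
   */
  static void prepareRecoveryDir(Path recoveryDir) throws IOException {
    if (Files.exists(recoveryDir)) {
      // Walk depth-first in reverse order so children are deleted before parents.
      try (Stream<Path> paths = Files.walk(recoveryDir)) {
        paths.sorted(Comparator.reverseOrder()).forEach(p -> {
          try {
            Files.delete(p);
          } catch (IOException e) {
            throw new UncheckedIOException(e);
          }
        });
      }
    }
    Files.createDirectories(recoveryDir);
  }
}
```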


[jira] [Commented] (ACCUMULO-4851) WAL recovery directory should be deleted before running LogSorter

2018-04-09 Thread Dave Marion (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-4851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431475#comment-16431475
 ] 

Dave Marion commented on ACCUMULO-4851:
---

I don't remember an issue like this. Sorry I couldn't be of any help here.


[jira] [Commented] (ACCUMULO-4851) WAL recovery directory should be deleted before running LogSorter

2018-04-09 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/ACCUMULO-4851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431437#comment-16431437
 ] 

Josh Elser commented on ACCUMULO-4851:
--

[~dlmarion], [~phrocker] suggested that you might have run into a similar issue 
at some point :)


Accumulo-Master - Build # 2306 - Fixed

2018-04-09 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-Master (build #2306)

Status: Fixed

Check console output at https://builds.apache.org/job/Accumulo-Master/2306/ to 
view the results.

[jira] [Created] (ACCUMULO-4851) WAL recovery directory should be deleted before running LogSorter

2018-04-09 Thread Josh Elser (JIRA)
Josh Elser created ACCUMULO-4851:


 Summary: WAL recovery directory should be deleted before running 
LogSorter
 Key: ACCUMULO-4851
 URL: https://issues.apache.org/jira/browse/ACCUMULO-4851
 Project: Accumulo
  Issue Type: Bug
  Components: tserver
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 1.9.0


Noticed this one on a user's 1.7-ish system.

A number of tablets (~9) were unassigned and reported on the Monitor as having 
failed to load. Digging into the exception, we could see the tablet load failed 
due to a FileNotFoundException:
{noformat}
2018-04-09 19:57:08,475 [tserver.TabletServer] WARN : exception trying to 
assign tablet xk;... /accumulo/tables/xk/t-00pyzd0
java.lang.RuntimeException: java.io.IOException: java.io.FileNotFoundException: 
File does not exist: 
/accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/failed/data
    at org.apache.accumulo.tserver.tablet.Tablet.<init>(Tablet.java:640)
    at org.apache.accumulo.tserver.tablet.Tablet.<init>(Tablet.java:449)
    at 
org.apache.accumulo.tserver.TabletServer$AssignmentHandler.run(TabletServer.java:2156)
    at 
org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    at 
org.apache.accumulo.tserver.ActiveAssignmentRunnable.run(ActiveAssignmentRunnable.java:61)
    at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at 
org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: java.io.FileNotFoundException: File does not 
exist: /accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/failed/data
    at 
org.apache.accumulo.tserver.log.TabletServerLogger.recover(TabletServerLogger.java:480)
    at 
org.apache.accumulo.tserver.TabletServer.recover(TabletServer.java:3012)
    at org.apache.accumulo.tserver.tablet.Tablet.<init>(Tablet.java:590)
    ... 9 more
Caused by: java.io.FileNotFoundException: File does not exist: 
/accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/failed/data
    at 
org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1446)
    at 
org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1438)
    at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1454)
    at 
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1823)
    at 
org.apache.hadoop.io.MapFile$Reader.createDataFileReader(MapFile.java:456)
    at org.apache.hadoop.io.MapFile$Reader.open(MapFile.java:429)
    at org.apache.hadoop.io.MapFile$Reader.<init>(MapFile.java:399)
    at 
org.apache.accumulo.tserver.log.MultiReader.<init>(MultiReader.java:113)
    at 
org.apache.accumulo.tserver.log.SortedLogRecovery.recover(SortedLogRecovery.java:105)
    at 
org.apache.accumulo.tserver.log.TabletServerLogger.recover(TabletServerLogger.java:478)
    ... 11 more
2018-04-09 19:57:08,476 [tserver.TabletServer] WARN : java.io.IOException: 
java.io.FileNotFoundException: File does not exist: 
/accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/failed/data
2018-04-09 19:57:08,476 [tserver.TabletServer] WARN : failed to open tablet 
xk;... reporting failure to master
2018-04-09 19:57:08,476 [tserver.TabletServer] WARN : rescheduling tablet load 
in 600.00 seconds
{noformat}
Upon further investigation of the recovery directory in HDFS for this WAL, we 
find the following:
{noformat}
$ hdfs dfs -ls -R /accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/
-rwxr--r--   3 accumulo hdfs  0 2018-04-06 22:12 
accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/failed
-rwxr--r--   3 accumulo hdfs  0 2018-04-06 22:10 
accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/finished
drwxr-xr-x   - accumulo hdfs  0 2018-04-06 22:09 
accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/part-r-0
-rw-r--r--   3 accumulo hdfs    8040761 2018-04-06 22:09 
accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/part-r-0/data
-rw-r--r--   3 accumulo hdfs    642 2018-04-06 22:09 
accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/part-r-0/index
drwxr-xr-x   - accumulo hdfs  0 2018-04-06 22:10 
accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/part-r-1
-rw-r--r--   3 accumulo hdfs    8540196 2018-04-06 22:10 
accumulo/recovery/0421c824-5e48-4bad-917a-b54a34a45849/part-r-1/data
-rw-r--r--   3 accumulo hdfs    524 2018-04-06 22:10 

[GitHub] ctubbsii commented on a change in pull request #19: Fixes #16 - Refactored examples in client.md

2018-04-09 Thread GitBox
ctubbsii commented on a change in pull request #19: Fixes #16 - Refactored 
examples in client.md
URL: https://github.com/apache/accumulo-examples/pull/19#discussion_r180233549
 
 

 ##
 File path: 
src/main/java/org/apache/accumulo/examples/client/ReadWriteExample.java
 ##
 @@ -17,135 +17,48 @@
 package org.apache.accumulo.examples.client;
 
 import java.util.Map.Entry;
-import java.util.SortedSet;
-import java.util.TreeSet;
 
 import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Durability;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.impl.DurabilityImpl;
+import org.apache.accumulo.core.client.TableExistsException;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.core.util.ByteArraySet;
-import org.apache.accumulo.examples.cli.ClientOnDefaultTable;
-import org.apache.accumulo.examples.cli.ScannerOpts;
-import org.apache.hadoop.io.Text;
-
-import com.beust.jcommander.IStringConverter;
-import com.beust.jcommander.Parameter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class ReadWriteExample {
-  // defaults
-  private static final String DEFAULT_AUTHS = "LEVEL1,GROUP1";
-  private static final String DEFAULT_TABLE_NAME = "test";
-
-  private Connector conn;
 
-  static class DurabilityConverter implements IStringConverter<Durability> {
-@Override
-public Durability convert(String value) {
-  return DurabilityImpl.fromString(value);
-}
-  }
-
-  static class Opts extends ClientOnDefaultTable {
-@Parameter(names = {"--createtable"}, description = "create table before 
doing anything")
-boolean createtable = false;
-@Parameter(names = {"--deletetable"}, description = "delete table when 
finished")
-boolean deletetable = false;
-@Parameter(names = {"--create"}, description = "create entries before any 
deletes")
-boolean createEntries = false;
-@Parameter(names = {"--read"}, description = "read entries after any 
creates/deletes")
-boolean readEntries = false;
-@Parameter(names = {"--delete"}, description = "delete entries after any 
creates")
-boolean deleteEntries = false;
-@Parameter(names = {"--durability"}, description = "durability used for 
writes (none, log, flush or sync)", converter = DurabilityConverter.class)
-Durability durability = Durability.DEFAULT;
-
-public Opts() {
-  super(DEFAULT_TABLE_NAME);
-  auths = new Authorizations(DEFAULT_AUTHS.split(","));
-}
-  }
-
-  // hidden constructor
-  private ReadWriteExample() {}
-
-  private void execute(Opts opts, ScannerOpts scanOpts) throws Exception {
-conn = opts.getConnector();
-
-// add the authorizations to the user
-Authorizations userAuthorizations = 
conn.securityOperations().getUserAuthorizations(opts.getPrincipal());
-ByteArraySet auths = new 
ByteArraySet(userAuthorizations.getAuthorizations());
-auths.addAll(opts.auths.getAuthorizations());
-if (!auths.isEmpty())
-  conn.securityOperations().changeUserAuthorizations(opts.getPrincipal(), 
new Authorizations(auths));
-
-// create table
-if (opts.createtable) {
-  SortedSet<Text> partitionKeys = new TreeSet<>();
-  for (int i = Byte.MIN_VALUE; i < Byte.MAX_VALUE; i++)
-partitionKeys.add(new Text(new byte[] {(byte) i}));
-  conn.tableOperations().create(opts.getTableName());
-  conn.tableOperations().addSplits(opts.getTableName(), partitionKeys);
-}
+  private static final Logger log = 
LoggerFactory.getLogger(ReadWriteExample.class);
 
 Review comment:
   I wouldn't even bother having a logger for the client code. It may make more 
sense for the example client code to just print to the console directly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ctubbsii commented on a change in pull request #19: Fixes #16 - Refactored examples in client.md

2018-04-09 Thread GitBox
ctubbsii commented on a change in pull request #19: Fixes #16 - Refactored 
examples in client.md
URL: https://github.com/apache/accumulo-examples/pull/19#discussion_r180233720
 
 

 ##
 File path: 
src/main/java/org/apache/accumulo/examples/client/ReadWriteExample.java
 ##
 @@ -17,135 +17,48 @@
 package org.apache.accumulo.examples.client;
 
 import java.util.Map.Entry;
-import java.util.SortedSet;
-import java.util.TreeSet;
 
 import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Connector;
-import org.apache.accumulo.core.client.Durability;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.impl.DurabilityImpl;
+import org.apache.accumulo.core.client.TableExistsException;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.core.util.ByteArraySet;
-import org.apache.accumulo.examples.cli.ClientOnDefaultTable;
-import org.apache.accumulo.examples.cli.ScannerOpts;
-import org.apache.hadoop.io.Text;
-
-import com.beust.jcommander.IStringConverter;
-import com.beust.jcommander.Parameter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class ReadWriteExample {
-  // defaults
-  private static final String DEFAULT_AUTHS = "LEVEL1,GROUP1";
-  private static final String DEFAULT_TABLE_NAME = "test";
-
-  private Connector conn;
 
-  static class DurabilityConverter implements IStringConverter<Durability> {
-@Override
-public Durability convert(String value) {
-  return DurabilityImpl.fromString(value);
-}
-  }
-
-  static class Opts extends ClientOnDefaultTable {
-@Parameter(names = {"--createtable"}, description = "create table before 
doing anything")
-boolean createtable = false;
-@Parameter(names = {"--deletetable"}, description = "delete table when 
finished")
-boolean deletetable = false;
-@Parameter(names = {"--create"}, description = "create entries before any 
deletes")
-boolean createEntries = false;
-@Parameter(names = {"--read"}, description = "read entries after any 
creates/deletes")
-boolean readEntries = false;
-@Parameter(names = {"--delete"}, description = "delete entries after any 
creates")
-boolean deleteEntries = false;
-@Parameter(names = {"--durability"}, description = "durability used for 
writes (none, log, flush or sync)", converter = DurabilityConverter.class)
-Durability durability = Durability.DEFAULT;
-
-public Opts() {
-  super(DEFAULT_TABLE_NAME);
-  auths = new Authorizations(DEFAULT_AUTHS.split(","));
-}
-  }
-
-  // hidden constructor
-  private ReadWriteExample() {}
-
-  private void execute(Opts opts, ScannerOpts scanOpts) throws Exception {
-conn = opts.getConnector();
-
-// add the authorizations to the user
-Authorizations userAuthorizations = 
conn.securityOperations().getUserAuthorizations(opts.getPrincipal());
-ByteArraySet auths = new 
ByteArraySet(userAuthorizations.getAuthorizations());
-auths.addAll(opts.auths.getAuthorizations());
-if (!auths.isEmpty())
-  conn.securityOperations().changeUserAuthorizations(opts.getPrincipal(), 
new Authorizations(auths));
-
-// create table
-if (opts.createtable) {
-  SortedSet<Text> partitionKeys = new TreeSet<>();
-  for (int i = Byte.MIN_VALUE; i < Byte.MAX_VALUE; i++)
-partitionKeys.add(new Text(new byte[] {(byte) i}));
-  conn.tableOperations().create(opts.getTableName());
-  conn.tableOperations().addSplits(opts.getTableName(), partitionKeys);
-}
+  private static final Logger log = 
LoggerFactory.getLogger(ReadWriteExample.class);
 
-// send mutations
-createEntries(opts);
+  public static void main(String[] args) throws Exception {
 
-// read entries
-if (opts.readEntries) {
-  // Note that the user needs to have the authorizations for the specified 
scan authorizations
-  // by an administrator first
-  Scanner scanner = conn.createScanner(opts.getTableName(), opts.auths);
-  scanner.setBatchSize(scanOpts.scanBatchSize);
-  for (Entry<Key,Value> entry : scanner)
-System.out.println(entry.getKey().toString() + " -> " + 
entry.getValue().toString());
+Connector connector = 
Connector.builder().usingProperties("conf/accumulo-client.properties").build();
+try {
+  connector.tableOperations().create("readwrite");
 
 Review comment:
   Would be nice to put the example tables in an "example" namespace.



[GitHub] ctubbsii commented on a change in pull request #19: Fixes #16 - Refactored examples in client.md

2018-04-09 Thread GitBox
ctubbsii commented on a change in pull request #19: Fixes #16 - Refactored 
examples in client.md
URL: https://github.com/apache/accumulo-examples/pull/19#discussion_r180234356
 
 

Accumulo-1.8 - Build # 292 - Fixed

2018-04-09 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-1.8 (build #292)

Status: Fixed

Check console output at https://builds.apache.org/job/Accumulo-1.8/292/ to view 
the results.

[GitHub] mikewalch opened a new pull request #19: Fixes #16 - Refactored examples in client.md

2018-04-09 Thread GitBox
mikewalch opened a new pull request #19: Fixes #16 - Refactored examples in 
client.md
URL: https://github.com/apache/accumulo-examples/pull/19
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


Accumulo-Pull-Requests - Build # 1116 - Failure

2018-04-09 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-Pull-Requests (build #1116)

Status: Failure

Check console output at 
https://builds.apache.org/job/Accumulo-Pull-Requests/1116/ to view the results.

[GitHub] milleruntime opened a new issue #18: Add debug flag to runex

2018-04-09 Thread GitBox
milleruntime opened a new issue #18: Add debug flag to runex
URL: https://github.com/apache/accumulo-examples/issues/18
 
 
   Would be nice to add a flag to the ./bin/runex script that will use the -X 
maven option instead of -q
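
The requested flag could look something like the sketch below (an assumption, not the actual script: the `-d` flag name and the `exec:java` invocation are illustrative of how runex might wrap Maven):

```shell
#!/usr/bin/env bash
# Hypothetical debug flag for bin/runex: "-d" swaps Maven's quiet (-q)
# output for full debug (-X) output. The exec:java command below is an
# assumption about how runex launches example classes.
runex_cmd() {
  local mvn_opts="-q"
  if [ "$1" = "-d" ]; then
    mvn_opts="-X"
    shift
  fi
  echo "mvn $mvn_opts exec:java -Dexec.mainClass=org.apache.accumulo.examples.$1"
}

runex_cmd -d client.RandomBatchScanner
```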




[GitHub] ctubbsii closed pull request #417: Make TLSv1.2 the default

2018-04-09 Thread GitBox
ctubbsii closed pull request #417: Make TLSv1.2 the default
URL: https://github.com/apache/accumulo/pull/417
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/core/src/main/java/org/apache/accumulo/core/conf/Property.java 
b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
index eea039f75d..1eed867d01 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/Property.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
@@ -130,11 +130,9 @@
       + "javax.net.ssl.* Accumulo properties"),
   RPC_SSL_CIPHER_SUITES("rpc.ssl.cipher.suites", "", PropertyType.STRING,
       "Comma separated list of cipher suites that can be used by accepted connections"),
-  RPC_SSL_ENABLED_PROTOCOLS("rpc.ssl.server.enabled.protocols", "TLSv1,TLSv1.1,TLSv1.2",
-      PropertyType.STRING,
+  RPC_SSL_ENABLED_PROTOCOLS("rpc.ssl.server.enabled.protocols", "TLSv1.2", PropertyType.STRING,
       "Comma separated list of protocols that can be used to accept connections"),
-  // TLSv1.2 should be used as the default when JDK6 support is dropped
-  RPC_SSL_CLIENT_PROTOCOL("rpc.ssl.client.protocol", "TLSv1", PropertyType.STRING,
+  RPC_SSL_CLIENT_PROTOCOL("rpc.ssl.client.protocol", "TLSv1.2", PropertyType.STRING,
       "The protocol used to connect to a secure server, must be in the list of enabled protocols "
           + "on the server side (rpc.ssl.server.enabled.protocols)"),
   /**
@@ -580,8 +578,8 @@
   MONITOR_SSL_EXCLUDE_CIPHERS("monitor.ssl.exclude.ciphers", "", PropertyType.STRING,
       "A comma-separated list of disallowed SSL Ciphers, see"
           + " monitor.ssl.include.ciphers to allow ciphers"),
-  MONITOR_SSL_INCLUDE_PROTOCOLS("monitor.ssl.include.protocols", "TLSv1,TLSv1.1,TLSv1.2",
-      PropertyType.STRING, "A comma-separate list of allowed SSL protocols"),
+  MONITOR_SSL_INCLUDE_PROTOCOLS("monitor.ssl.include.protocols", "TLSv1.2", PropertyType.STRING,
+      "A comma-separate list of allowed SSL protocols"),
 
   MONITOR_LOCK_CHECK_INTERVAL("monitor.lock.check.interval", "5s", PropertyType.TIMEDURATION,
       "The amount of time to sleep between checking for the Montior ZooKeeper lock"),


 




[GitHub] ctubbsii commented on issue #417: Make TLSv1.2 the default

2018-04-09 Thread GitBox
ctubbsii commented on issue #417: Make TLSv1.2 the default
URL: https://github.com/apache/accumulo/pull/417#issuecomment-379886655
 
 
   @PircDef Maybe... that would be a bigger change, and more testing to ensure 
correctness. This is a simple configuration defaults change vs. changing 
currently functioning code.




[GitHub] ctubbsii commented on issue #402: ACCUMULO-4615: Updated get status for thread safety and with a per-task timeout

2018-04-09 Thread GitBox
ctubbsii commented on issue #402: ACCUMULO-4615: Updated get status for thread 
safety and with a per-task timeout
URL: https://github.com/apache/accumulo/pull/402#issuecomment-379885909
 
 
   Resolved conflicts in PR with updated 1.8 branch.




[GitHub] keith-turner commented on a change in pull request #402: ACCUMULO-4615: Updated get status for thread safety and with a per-task timeout

2018-04-09 Thread GitBox
keith-turner commented on a change in pull request #402: ACCUMULO-4615: Updated 
get status for thread safety and with a per-task timeout
URL: https://github.com/apache/accumulo/pull/402#discussion_r180182724
 
 

 ##
 File path: 
server/master/src/main/java/org/apache/accumulo/master/TimeoutTaskExecutor.java
 ##
 @@ -0,0 +1,249 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.master;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.concurrent.NotThreadSafe;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Objects;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+
+/**
+ * Runs one or more tasks with a timeout per task (instead of a timeout for the entire pool). Uses callbacks to invoke functions on successful, timed out, or
+ * tasks that error.
+ *
+ * This class uses an underlying fixed thread pool to schedule the submitted tasks. Once a task is submitted, the desired end time for the task is recorded and
+ * used to determine the timeout for the task's associated {@link Future}.
+ *
+ * The timeout will not be exact as the start time is recorded prior to submitting the {@link Callable}. This may result in an effective timeout that is
+ * slightly smaller than expected. The timeout used during initialization should be adjusted accordingly.
+ *
+ * The {@link TimeoutTaskExecutor} itself is not a thread-safe class. Only a single thread should submit tasks and complete them.
+ *
+ * @param <T> The return type for the corresponding Callable.
+ * @param <C> The type of Callable submitted to this executor.
+ */
+@NotThreadSafe
+public class TimeoutTaskExecutor<T, C extends Callable<T>> implements AutoCloseable {
+
+  private final static Logger log = LoggerFactory.getLogger(TimeoutTaskExecutor.class);
+
+  private final long timeoutInNanos;
+  private final ExecutorService executorService;
+  private final BlockingQueue startedTasks;
+  private final List wrappedTasks;
+
+  private SuccessCallback successCallback;
+  private ExceptionCallback exceptionCallback;
+  private TimeoutCallback timeoutCallback;
+
+  /**
+   * Constructs a new TimeoutTaskExecutor that will use the given number of worker threads and timeout. Takes an expected number of Callables to initialize the
+   * underlying data structures appropriately.
+   * 
+   * If the expectedNumCallables is sized too small, this executor will block on calls to submit() once the internal queue is full.
+   *
+   * @param numThreads   The number of threads to use.
+   * @param timeoutInMillis  The timeout for each task in milliseconds.
+   * @param expectedNumCallables The expected number of callables you will schedule. Note this is used for an underlying BlockingQueue. If sized too small this will cause blocking when calling submit().
+   * @throws IllegalArgumentException If numThreads is less than 1 or expectedNumCallables is negative.
+   */
+  public TimeoutTaskExecutor(int numThreads, long timeoutInMillis, int expectedNumCallables) {
+    Preconditions.checkArgument(numThreads >= 1, "Number of threads must be at least 1.");
+    Preconditions.checkArgument(expectedNumCallables >= 0, "The expected number of callables must be non-negative.");
+
+    this.executorService = Executors.newFixedThreadPool(numThreads);
+    this.startedTasks = new ArrayBlockingQueue<>(expectedNumCallables);
+    this.timeoutInNanos = TimeUnit.MILLISECONDS.toNanos(timeoutInMillis);
+    this.wrappedTasks = new ArrayList<>(expectedNumCallables);
+  }
+
+  /**
+   * Submits a new task to the executor.
+   *
+   * @param callable Task to run
+   */
+  public void submit(C callable) {
 
 Review comment:
   It would be nice to add a sanity check that makes this fail if complete() 

[GitHub] milleruntime opened a new issue #17: Update bulkIngest with new Connector API

2018-04-09 Thread GitBox
milleruntime opened a new issue #17: Update bulkIngest with new Connector API
URL: https://github.com/apache/accumulo-examples/issues/17
 
 
   




[GitHub] keith-turner commented on issue #402: ACCUMULO-4615: Updated get status for thread safety and with a per-task timeout

2018-04-09 Thread GitBox
keith-turner commented on issue #402: ACCUMULO-4615: Updated get status for 
thread safety and with a per-task timeout
URL: https://github.com/apache/accumulo/pull/402#issuecomment-379835988
 
 
   I prototyped my half baked idea. The prototype can be found in commit 
keith-turner@5044241ca445a0bbc48e38931f09f77d8acb44b9  which is in the branch 
[keith-turner/ACCUMULO-4615](https://github.com/keith-turner/accumulo/tree/ACCUMULO-4615).
  I am not proposing we do this, just posting it for consideration.  The 
concept needs to be tested on a busy cluster.
   
   I say this idea is half baked because I have never used the thrift send and 
recv method separately.  I suspect it may work well, but I am not sure.  I 
think in the case where a tablet server is using THsHaServer it may work really 
well because there is a dedicated thread to accept connections (even if all 
threads in the servers thread pool for processing connections are busy).  
However in the case where the tserver uses the TThreadPoolServer (this is used 
when Kerberos is used and it can also be configured) there is no dedicated 
thread to accept connections, so the send call from the master may block if all 
server threads are busy (I am not sure).  A dedicated thrift server and port 
for getting status in tablet servers would solve this issue.
   
   A completely different way of solving this issue may be to use the thrift 
async client which uses non-blocking sockets for connections.  However I am not 
sure how well this would work with SSL and Kerberos and I have never used the 
thrift async client.  I am interested in learning more about it though, but 
have not yet found the time to do so.
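
The per-task-timeout approach this PR takes can be sketched with plain JDK concurrency primitives. This is an illustrative simplification, not the actual TimeoutTaskExecutor API: a deadline is recorded for each task at submit time, and each Future is then polled with the time remaining against its own deadline rather than one deadline for the whole pool.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of a per-task timeout: names are illustrative, not the real API.
public class PerTaskTimeoutSketch {

  // Returns one status string per task, in submission order:
  // "ok:<result>", "timeout", "error", or "interrupted".
  public static List<String> runAll(List<Callable<String>> tasks, long timeoutMillis) {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    List<Future<String>> futures = new ArrayList<>();
    List<Long> deadlines = new ArrayList<>();
    for (Callable<String> task : tasks) {
      // Deadline is recorded before submit, so the effective timeout can be
      // slightly shorter than requested -- matching the javadoc caveat above.
      deadlines.add(System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis));
      futures.add(pool.submit(task));
    }
    List<String> results = new ArrayList<>();
    for (int i = 0; i < futures.size(); i++) {
      long remaining = deadlines.get(i) - System.nanoTime();
      try {
        results.add("ok:" + futures.get(i).get(Math.max(0, remaining), TimeUnit.NANOSECONDS));
      } catch (TimeoutException e) {
        futures.get(i).cancel(true); // interrupt the straggler
        results.add("timeout");
      } catch (ExecutionException e) {
        results.add("error");
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        results.add("interrupted");
      }
    }
    pool.shutdownNow();
    return results;
  }

  public static void main(String[] args) {
    List<Callable<String>> tasks = new ArrayList<>();
    tasks.add(() -> "fast");
    tasks.add(() -> { Thread.sleep(5000); return "slow"; });
    System.out.println(runAll(tasks, 200)); // prints [ok:fast, timeout]
  }
}
```

The key point is that a slow task only consumes its own remaining budget when its Future is polled; earlier fast tasks do not extend or shrink a later task's window.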
   
   




[GitHub] mikewalch opened a new issue #16: Update examples in client.md

2018-04-09 Thread GitBox
mikewalch opened a new issue #16: Update examples in client.md
URL: https://github.com/apache/accumulo-examples/issues/16
 
 
   




[GitHub] mikewalch closed pull request #14: Update batch example to use new Connector builder

2018-04-09 Thread GitBox
mikewalch closed pull request #14: Update batch example to use new Connector 
builder
URL: https://github.com/apache/accumulo-examples/pull/14
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/README.md b/README.md
index c4450eb..dbea091 100644
--- a/README.md
+++ b/README.md
@@ -34,10 +34,11 @@ Before running any of the examples, the following steps must be performed.
     git clone https://github.com/apache/accumulo-examples.git
     mvn clean package
 
-4. Specify Accumulo connection information.  All examples read connection information from a 
-   properties file. Copy the template and edit it.
+4. Specify Accumulo connection information in `conf/accumulo-client.properties`.  Some old examples
+   still read connection information from an examples.conf file so that should also be configured.
 
     cd accumulo-examples
+    nano conf/accumulo-client.properties
     cp examples.conf.template examples.conf
     nano examples.conf
 
diff --git a/docs/batch.md b/docs/batch.md
index 19acf84..305a732 100644
--- a/docs/batch.md
+++ b/docs/batch.md
@@ -16,42 +16,40 @@ limitations under the License.
 -->
 # Apache Accumulo Batch Writing and Scanning Example
 
-This tutorial uses the following Java classes:
-
- * [SequentialBatchWriter.java] - writes mutations with sequential rows and random values
- * [RandomBatchWriter.java] - used by SequentialBatchWriter to generate random values
- * [RandomBatchScanner.java] - reads random rows and verifies their values
-
 This is an example of how to use the BatchWriter and BatchScanner.
 
-First, you must ensure that the user you are running with (i.e `myuser` below) has the
-`exampleVis` authorization.
-
-    $ accumulo shell -u root -e "setauths -u myuser -s exampleVis"
-
-Second, you must create the table, batchtest1, ahead of time.
-
-    $ accumulo shell -u root -e "createtable batchtest1"
-
-The command below adds 1 entries with random 50 bytes values to Accumulo.
+This tutorial uses the following Java classes.
 
-    $ ./bin/runex client.SequentialBatchWriter -c ./examples.conf -t batchtest1 --start 0 --num 1 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis
-
-The command below will do 100 random queries.
-
-    $ ./bin/runex client.RandomBatchScanner -c ./examples.conf -t batchtest1 --num 100 --min 0 --max 1 --size 50 --scanThreads 20 --auths exampleVis
-
-    07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
-    07 11:33:11,112 [client.CountingVerifyingReceiver] INFO : finished
-    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : 694.44 lookups/sec   0.14 secs
-
-    07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : num results : 100
-
-    07 11:33:11,364 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries...
-    07 11:33:11,370 [client.CountingVerifyingReceiver] INFO : finished
-    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec   0.05 secs
+ * [SequentialBatchWriter.java] - writes mutations with sequential rows and random values
+ * [RandomBatchScanner.java] - reads random rows and verifies their values
 
-    07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100
+Run `SequentialBatchWriter` to add 1 entries with random 50 bytes values to Accumulo.
+
+    $ ./bin/runex client.SequentialBatchWriter
+
+Verify data was ingested by scanning the table using the Accumulo shell:
+
+    $ accumulo shell
+    root@instance> table batch
+    root@instance batch> scan
+
+Run `RandomBatchScanner` to perform 1000 random queries and verify the results.
+
+    $ ./bin/runex client.RandomBatchScanner
+    16:04:05,950 [examples.client.RandomBatchScanner] INFO : Generating 1000 random ranges for BatchScanner to read
+    16:04:06,020 [examples.client.RandomBatchScanner] INFO : Reading ranges using BatchScanner
+    16:04:06,283 [examples.client.RandomBatchScanner] TRACE: 100 lookups
+    16:04:06,290 [examples.client.RandomBatchScanner] TRACE: 200 lookups
+    16:04:06,294 [examples.client.RandomBatchScanner] TRACE: 300 lookups
+    16:04:06,297 [examples.client.RandomBatchScanner] TRACE: 400 lookups
+    16:04:06,301 [examples.client.RandomBatchScanner] TRACE: 500 lookups
+    16:04:06,304 [examples.client.RandomBatchScanner] TRACE: 600 lookups
+    16:04:06,307 [examples.client.RandomBatchScanner] TRACE: 700 lookups
+    16:04:06,309 [examples.client.RandomBatchScanner] TRACE: 800 lookups
+    16:04:06,316 [examples.client.RandomBatchScanner] TRACE: 900 lookups
+    16:04:06,320 [examples.client.RandomBatchScanner] TRACE: 1000 lookups
+    16:04:06,330 [examples.client.RandomBatchScanner] INFO : Scan finished!

[GitHub] mikewalch commented on issue #14: Update batch example to use new Connector builder

2018-04-09 Thread GitBox
mikewalch commented on issue #14: Update batch example to use new Connector 
builder
URL: https://github.com/apache/accumulo-examples/pull/14#issuecomment-379823027
 
 
   @milleruntime, if you want to help that would be great.  i'll create an 
issue and assign myself to any example that i am updating. if you do the same, 
we won't work on the same one.




[GitHub] mikewalch opened a new issue #15: Remove examples.conf configuration

2018-04-09 Thread GitBox
mikewalch opened a new issue #15: Remove examples.conf configuration 
URL: https://github.com/apache/accumulo-examples/issues/15
 
 
   Examples are being transitioned to use `conf/accumulo-client.properties`.  
After all examples are transitioned, the `examples.conf` file should be removed 
and any docs referencing it should also be removed.




[GitHub] milleruntime commented on a change in pull request #14: Update batch example to use new Connector builder

2018-04-09 Thread GitBox
milleruntime commented on a change in pull request #14: Update batch example to 
use new Connector builder
URL: https://github.com/apache/accumulo-examples/pull/14#discussion_r180136997
 
 

 ##
 File path: README.md
 ##
@@ -34,10 +34,11 @@ Before running any of the examples, the following steps must be performed.
     git clone https://github.com/apache/accumulo-examples.git
     mvn clean package
 
-4. Specify Accumulo connection information.  All examples read connection information from a 
-   properties file. Copy the template and edit it.
+4. Specify Accumulo connection information in `conf/accumulo-client.properties`.  Some old examples
+   still read connection information from an examples.conf file so that should also be configured.
 
 Review comment:
   I take it you are just going through and updating each example one by one?  
I think that is fine, but maybe make a note somewhere (create an issue?) to 
consolidate the two files when it's done.  




[GitHub] PircDef commented on issue #417: Make TLSv1.2 the default

2018-04-09 Thread GitBox
PircDef commented on issue #417: Make TLSv1.2 the default
URL: https://github.com/apache/accumulo/pull/417#issuecomment-379788643
 
 
   Is there an intent to remove ProtocolOverridingSSLSocketFactory as well?

