[GitHub] jomach commented on a change in pull request #325: ACCUMULO-2341?

2017-11-30 Thread GitBox
jomach commented on a change in pull request #325: ACCUMULO-2341?
URL: https://github.com/apache/accumulo/pull/325#discussion_r154278852
 
 

 ##
 File path: server/base/src/main/java/org/apache/accumulo/server/util/Admin.java
 ##
 @@ -354,12 +354,8 @@ public void run() {
   }
 
   private static void stopServer(final ClientContext context, final boolean 
tabletServersToo) throws AccumuloException, AccumuloSecurityException {
-MasterClient.executeVoid(context, new ClientExec<MasterClientService.Client>() {
-  @Override
-  public void execute(MasterClientService.Client client) throws Exception {
-client.shutdown(Tracer.traceInfo(), context.rpcCreds(), 
tabletServersToo);
-  }
-});
+MasterClient.executeVoidWithConnRetry(context,
 
 Review comment:
   What if we have some networking issue? This way we cannot recover.
   IMHO we should have a call only for the admin tasks so that we can fail directly.




[jira] [Resolved] (ACCUMULO-4546) IllegalTableTransitionException should include a default message that logs the requested state transition

2017-11-30 Thread Keith Turner (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Turner resolved ACCUMULO-4546.

Resolution: Fixed

> IllegalTableTransitionException should include a default message that logs 
> the requested state transition
> -
>
> Key: ACCUMULO-4546
> URL: https://issues.apache.org/jira/browse/ACCUMULO-4546
> Project: Accumulo
>  Issue Type: Bug
>  Components: server-base
>Affects Versions: 1.6.6
>Reporter: John Vines
>Assignee: Mark Owens
>  Labels: pull-request-available
> Fix For: 1.7.4, 1.8.2, 2.0.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> While trying to track down the root of an Illegal state transition for a 
> table, I hit a dead end when the original transition to bring a table online 
> failed. The IllegalTableTransitionException takes in the old and new states 
> in the constructor for the exception, but these states are not used to 
> construct any sort of message, so this information isn't available in the 
> logs. We should have a default message for this constructor.
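
A default message along the lines below would put the attempted transition into the logs. This is only a hedged sketch; the constructor signature, superclass, and the TableState package are assumptions, not taken from the merged patch.

{code:java}
// Hypothetical sketch only: build a default message from the old and new states.
import org.apache.accumulo.core.master.state.tables.TableState; // assumed package location

public class IllegalTableTransitionException extends Exception {

  private final TableState oldState;
  private final TableState newState;

  public IllegalTableTransitionException(TableState oldState, TableState newState) {
    // default message that records the requested state transition
    super("Error transitioning from " + oldState + " state to " + newState + " state");
    this.oldState = oldState;
    this.newState = newState;
  }

  public TableState getOldState() {
    return oldState;
  }

  public TableState getNewState() {
    return newState;
  }
}
{code}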





[jira] [Resolved] (ACCUMULO-4740) Enable GCM mode for crypto

2017-11-30 Thread Keith Turner (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Turner resolved ACCUMULO-4740.

Resolution: Fixed

> Enable GCM mode for crypto
> --
>
> Key: ACCUMULO-4740
> URL: https://issues.apache.org/jira/browse/ACCUMULO-4740
> Project: Accumulo
>  Issue Type: Improvement
>Reporter: Nick Felts
>Assignee: Nick Felts
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Enable the use of GCM as an optional encryption mode.   
> While this change will allow for GCM, it should probably only be used for 
> Java 9 and later.  
> https://docs.oracle.com/javase/9/whatsnew/toc.htm#JSNEW-GUID-71A09701-7412-4499-A88D-53FA8BFBD3D0
>   
> http://openjdk.java.net/jeps/246
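
For background, GCM is an authenticated (AEAD) cipher mode available through the standard JCE. The hedged sketch below shows plain javax.crypto usage of AES/GCM/NoPadding; it is not the Accumulo crypto module API, just an illustration of the mode this issue enables.

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class GcmExample {
  public static void main(String[] args) throws Exception {
    KeyGenerator keyGen = KeyGenerator.getInstance("AES");
    keyGen.init(128);
    SecretKey key = keyGen.generateKey();

    byte[] iv = new byte[12];             // 96-bit IV, the recommended size for GCM
    new SecureRandom().nextBytes(iv);     // a fresh IV is required for every encryption

    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv)); // 128-bit auth tag
    byte[] ciphertext = cipher.doFinal("block data".getBytes(StandardCharsets.UTF_8));
    System.out.println("ciphertext+tag bytes: " + ciphertext.length);
  }
}
{code}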





[jira] [Updated] (ACCUMULO-4740) Enable GCM mode for crypto

2017-11-30 Thread Keith Turner (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Turner updated ACCUMULO-4740:
---
Fix Version/s: 2.0.0

> Enable GCM mode for crypto
> --
>
> Key: ACCUMULO-4740
> URL: https://issues.apache.org/jira/browse/ACCUMULO-4740
> Project: Accumulo
>  Issue Type: Improvement
>Reporter: Nick Felts
>Assignee: Nick Felts
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Enable the use of GCM as an optional encryption mode.   
> While this change will allow for GCM, it should probably only be used for 
> Java 9 and later.  
> https://docs.oracle.com/javase/9/whatsnew/toc.htm#JSNEW-GUID-71A09701-7412-4499-A88D-53FA8BFBD3D0
>   
> http://openjdk.java.net/jeps/246





[jira] [Updated] (ACCUMULO-4740) Enable GCM mode for crypto

2017-11-30 Thread Keith Turner (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Turner updated ACCUMULO-4740:
---
Affects Version/s: (was: 2.0.0)

> Enable GCM mode for crypto
> --
>
> Key: ACCUMULO-4740
> URL: https://issues.apache.org/jira/browse/ACCUMULO-4740
> Project: Accumulo
>  Issue Type: Improvement
>Reporter: Nick Felts
>Assignee: Nick Felts
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Enable the use of GCM as an optional encryption mode.   
> While this change will allow for GCM, it should probably only be used for 
> Java 9 and later.  
> https://docs.oracle.com/javase/9/whatsnew/toc.htm#JSNEW-GUID-71A09701-7412-4499-A88D-53FA8BFBD3D0
>   
> http://openjdk.java.net/jeps/246





[GitHub] keith-turner commented on issue #328: ACCUMULO-4743 Replaced general custom with tserver prefix for cache config

2017-11-30 Thread GitBox
keith-turner commented on issue #328: ACCUMULO-4743 Replaced general custom 
with tserver prefix for cache config
URL: https://github.com/apache/accumulo/pull/328#issuecomment-348401636
 
 
   @jkrdev  I tested this with 
[keith-turner/accumulo-ohc](https://github.com/keith-turner/accumulo-ohc) and 
it worked fine.




[GitHub] keith-turner commented on a change in pull request #328: ACCUMULO-4743 Replaced general custom with tserver prefix for cache config

2017-11-30 Thread GitBox
keith-turner commented on a change in pull request #328: ACCUMULO-4743 Replaced 
general custom with tserver prefix for cache config
URL: https://github.com/apache/accumulo/pull/328#discussion_r154265158
 
 

 ##
 File path: 
core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/BlockCacheManager.java
 ##
 @@ -30,7 +30,7 @@
 
   private final Map caches = new HashMap<>();
 
-  public static final String CACHE_PROPERTY_BASE = 
Property.GENERAL_ARBITRARY_PROP_PREFIX + "cache.";
+  public static final String CACHE_PROPERTY_BASE = Property.TSERV_PREFIX + 
"cache.";
 
 Review comment:
   In order to keep system cache properties from conflicting with custom cache 
config properties, I am thinking it would be better to make this 
`Property.TSERV_PREFIX + "cache.config."`.
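
   For illustration, with that prefix an external cache implementation's settings would live under keys like the one sketched below (the "ohc.capacity" suffix is made up for the example; only the "tserver." prefix comes from `Property.TSERV_PREFIX`):

   ```java
 // Hedged illustration of the resulting property namespace; suffixes are hypothetical.
 String cachePropertyBase = "tserver.cache.config.";        // Property.TSERV_PREFIX + "cache.config."
 String exampleKey = cachePropertyBase + "ohc.capacity";    // e.g. tserver.cache.config.ohc.capacity
   ```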




Accumulo-Master - Build # 2188 - Failure

2017-11-30 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-Master (build #2188)

Status: Failure

Check console output at https://builds.apache.org/job/Accumulo-Master/2188/ to 
view the results.

[jira] [Resolved] (ACCUMULO-4669) RFile can create very large blocks when key statistics are not uniform

2017-11-30 Thread Keith Turner (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Turner resolved ACCUMULO-4669.

Resolution: Fixed

> RFile can create very large blocks when key statistics are not uniform
> --
>
> Key: ACCUMULO-4669
> URL: https://issues.apache.org/jira/browse/ACCUMULO-4669
> Project: Accumulo
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.7.2, 1.7.3, 1.8.0, 1.8.1
>Reporter: Adam Fuchs
>Assignee: Keith Turner
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.7.4, 1.8.2, 2.0.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> RFile.Writer.append checks for giant keys and avoids writing them as index 
> blocks. This check is flawed and can result in multi-GB blocks. In our case, 
> a 20GB compressed RFile had one block with over 2GB raw size. This happened 
> because the key size statistics changed after some point in the file. The 
> code in question follows:
> {code}
> private boolean isGiantKey(Key k) {
>   // consider a key thats more than 3 standard deviations from previously 
> seen key sizes as giant
>   return k.getSize() > keyLenStats.getMean() + 
> keyLenStats.getStandardDeviation() * 3;
> }
> ...
>   if (blockWriter == null) {
> blockWriter = fileWriter.prepareDataBlock();
>   } else if (blockWriter.getRawSize() > blockSize) {
> ...
> if ((prevKey.getSize() <= avergageKeySize || blockWriter.getRawSize() 
> > maxBlockSize) && !isGiantKey(prevKey)) {
>   closeBlock(prevKey, false);
> ...
> {code}
> Before closing a block that has grown beyond the target block size, we check 
> that the key is below average size or that the block has grown past 1.1 times 
> the target block size (maxBlockSize), and we check that the key isn't a 
> "giant" key, i.e. more than 3 standard deviations above the mean of key sizes 
> seen so far.
> Our RFiles often have one row of data with different column families 
> representing various forward and inverted indexes. This is a table design 
> similar to the WikiSearch example. The first column family in this case had 
> very uniform, relatively small key sizes. This first column family comprised 
> gigabytes of data, split up into roughly 100KB blocks. When we switched to 
> the next column family the keys grew in size, but were still under about 100 
> bytes. The statistics of the first column family had firmly established a 
> smaller mean and tiny standard deviation (approximately 0), and it took over 
> 2GB of larger keys to bring the standard deviation up enough so that keys 
> were no longer considered "giant" and the block could be closed.
> Now that we're aware, we see large blocks (more than 10x the target block 
> size) in almost every RFile we write. This only became a glaring problem when 
> we got OOM exceptions trying to decompress the block, but it also shows up in 
> a number of subtle performance problems, like high variance in latencies for 
> looking up particular keys.
> The fix for this should produce bounded RFile block sizes, limited to the 
> greater of 2x the maximum key/value size in the block and some configurable 
> threshold, such as 1.1 times the compressed block size. We need a firm cap to 
> be able to reason about memory usage in various applications.
> The following code produces arbitrarily large RFile blocks:
> {code}
>   FileSKVWriter writer = RFileOperations.getInstance().openWriter(filename, 
> fs, conf, acuconf);
>   writer.startDefaultLocalityGroup();
>   SummaryStatistics keyLenStats = new SummaryStatistics();
>   Random r = new Random();
>   byte [] buffer = new byte[minRowSize]; 
>   for(int i = 0; i < 10; i++) {
> byte [] valBytes = new byte[valLength];
> r.nextBytes(valBytes);
> r.nextBytes(buffer);
> ByteBuffer.wrap(buffer).putInt(i);
> Key k = new Key(buffer, 0, buffer.length, emptyBytes, 0, 0, emptyBytes, 
> 0, 0, emptyBytes, 0, 0, 0);
> Value v = new Value(valBytes);
> writer.append(k, v);
> keyLenStats.addValue(k.getSize());
> int newBufferSize = Math.max(buffer.length, (int) 
> Math.ceil(keyLenStats.getMean() + keyLenStats.getStandardDeviation() * 4 + 
> 0.0001));
> buffer = new byte[newBufferSize];
> if(keyLenStats.getSum() > targetSize)
>   break;
>   }
>   writer.close();
> {code}
> One telltale symptom of this bug is an OutOfMemoryException thrown from a 
> readahead thread with message "Requested array size exceeds VM limit". This 
> will only happen if the block cache size is big enough to hold the expected 
> raw block size, 2GB in our case. This message is rare, and really only 
> happens when allocating an array of size Integer.MAX_VALUE or 
> Integer.MAX_VALUE-1 on the 

[GitHub] keith-turner closed pull request #293: ACCUMULO-4669 Use windowed statistics in RFile

2017-11-30 Thread GitBox
keith-turner closed pull request #293: ACCUMULO-4669 Use windowed statistics in 
RFile
URL: https://github.com/apache/accumulo/pull/293
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/RFile.java 
b/core/src/main/java/org/apache/accumulo/core/file/rfile/RFile.java
index c1931daed3..b0409aff00 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/RFile.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/RFile.java
@@ -73,7 +73,6 @@
 import org.apache.accumulo.core.util.LocalityGroupUtil;
 import org.apache.accumulo.core.util.MutableByteSequence;
 import org.apache.commons.lang.mutable.MutableLong;
-import org.apache.commons.math3.stat.descriptive.SummaryStatistics;
 import org.apache.hadoop.io.Writable;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -403,7 +402,8 @@ public void flushIfNeeded() throws IOException {
 
 private SampleLocalityGroupWriter sample;
 
-private SummaryStatistics keyLenStats = new SummaryStatistics();
+// Use windowed stats to fix ACCUMULO-4669
+private RollingStats keyLenStats = new RollingStats(2017);
 private double avergageKeySize = 0;
 
 LocalityGroupWriter(BlockFileWriter fileWriter, long blockSize, long 
maxBlockSize, LocalityGroupMetadata currentLocalityGroup,
@@ -416,8 +416,9 @@ public void flushIfNeeded() throws IOException {
 }
 
 private boolean isGiantKey(Key k) {
-  // consider a key thats more than 3 standard deviations from previously 
seen key sizes as giant
-  return k.getSize() > keyLenStats.getMean() + 
keyLenStats.getStandardDeviation() * 3;
+  double mean = keyLenStats.getMean();
+  double stddev = keyLenStats.getStandardDeviation();
+  return k.getSize() > mean + Math.max(9 * mean, 4 * stddev);
 }
 
 public void append(Key key, Value value) throws IOException {
diff --git 
a/core/src/main/java/org/apache/accumulo/core/file/rfile/RollingStats.java 
b/core/src/main/java/org/apache/accumulo/core/file/rfile/RollingStats.java
new file mode 100644
index 00..c0c5554889
--- /dev/null
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/RollingStats.java
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software 
distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 
KIND, either express
+ * or implied. See the License for the specific language governing permissions 
and limitations under
+ * the License.
+ */
+package org.apache.accumulo.core.file.rfile;
+
+import org.apache.commons.math3.stat.StatUtils;
+import org.apache.commons.math3.util.FastMath;
+
+/**
+ * This class supports efficient window statistics. Apache commons math3 has a class called DescriptiveStatistics that supports windows. DescriptiveStatistics
+ * recomputes the statistics over the entire window each time they are requested. In a test over 1,000,000 entries with a window size of 1019 that requested stats
+ * for each entry, this class took ~50ms and DescriptiveStatistics took ~6,000ms.
+ *
+ * <p>
+ * This class may not be as accurate as DescriptiveStatistics. In unit tests it is within 1/1000 of DescriptiveStatistics.
+ */
+class RollingStats {
+  private int position;
+  private double window[];
+
+  private double average;
+  private double variance;
+  private double stddev;
+
+  // indicates if the window is full
+  private boolean windowFull;
+
+  private int recomputeCounter = 0;
+
+  RollingStats(int windowSize) {
+this.windowFull = false;
+this.position = 0;
+this.window = new double[windowSize];
+  }
+
+  /**
+   * @see <a href="http://jonisalonen.com/2014/efficient-and-accurate-rolling-standard-deviation/">Efficient and accurate rolling standard deviation</a>
+   */
+  private void update(double newValue, double oldValue, int windowSize) {
+double delta = newValue - oldValue;
+
+double oldAverage = average;
+average = average + delta / windowSize;
+variance += delta * (newValue - average + oldValue - oldAverage) / 
(windowSize - 1);
+stddev = FastMath.sqrt(variance);
+  }
+
+  void addValue(long stat) {
+
+double old = window[position];
+window[position] = stat;
+position++;
+
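
The windowed update in the RollingStats diff above follows the standard rolling-window mean/variance recurrence from the post cited in its javadoc. Below is a self-contained, hedged restatement of that recurrence; class and method names are illustrative, not the Accumulo implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative rolling mean / sample standard deviation over a fixed-size window.
class WindowStats {
  private final int windowSize;
  private final Deque<Double> window = new ArrayDeque<>();
  private double mean = 0.0;
  private double m2 = 0.0;   // sum of squared deviations from the mean (Welford's M2)

  WindowStats(int windowSize) {
    this.windowSize = windowSize;
  }

  void addValue(double value) {
    if (window.size() < windowSize) {
      // growing phase: plain Welford update
      window.addLast(value);
      double delta = value - mean;
      mean += delta / window.size();
      m2 += delta * (value - mean);
    } else {
      // full window: drop the oldest value and adjust mean and M2 incrementally
      double old = window.removeFirst();
      window.addLast(value);
      double oldMean = mean;
      mean += (value - old) / windowSize;
      m2 += (value - old) * (value - mean + old - oldMean);
    }
  }

  double getMean() {
    return mean;
  }

  double getStandardDeviation() {
    int n = window.size();
    return n > 1 ? Math.sqrt(m2 / (n - 1)) : 0.0;
  }
}
```

The patch constructs its stats object with a window of 2017 entries, so an equivalent use of this sketch would be `new WindowStats(2017)`.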

[GitHub] keith-turner commented on issue #293: ACCUMULO-4669 Use windowed statistics in RFile

2017-11-30 Thread GitBox
keith-turner commented on issue #293: ACCUMULO-4669 Use windowed statistics in 
RFile
URL: https://github.com/apache/accumulo/pull/293#issuecomment-348381904
 
 
   merged in 26e83f05d1448631f0b8a0da1b8671abe9beb922




[GitHub] ctubbsii commented on issue #43: ACCUMULO-3970

2017-11-30 Thread GitBox
ctubbsii commented on issue #43: ACCUMULO-3970
URL: https://github.com/apache/accumulo/pull/43#issuecomment-348358441
 
 
   @milleruntime I think it's fine to close now. This PR was never intended to 
be merged. It can always be re-opened if there is consensus on including the 
feature in a future version, and the PR is updated with new merge-ready commits.




[GitHub] ctubbsii closed pull request #43: ACCUMULO-3970

2017-11-30 Thread GitBox
ctubbsii closed pull request #43: ACCUMULO-3970
URL: https://github.com/apache/accumulo/pull/43
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/core/src/main/java/org/apache/accumulo/core/conf/Property.java 
b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
index ef4d87734e..2ba98287dd 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/Property.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
@@ -512,6 +512,9 @@
   @Experimental
   TABLE_VOLUME_CHOOSER("table.volume.chooser", 
"org.apache.accumulo.server.fs.RandomVolumeChooser", PropertyType.CLASSNAME,
   "The class that will be used to select which volume will be used to 
create new files for this table."),
+  @Experimental
+  TABLE_VTI_CLASS("table.vti.class", "", PropertyType.STRING, "The class that 
will be used to transform key-value pairs"
+  + " to different visibilities at scan-time.\nThe class must be a 
subclass of VisibilityTransformingIterator"),
 
   // VFS ClassLoader properties
   
VFS_CLASSLOADER_SYSTEM_CLASSPATH_PROPERTY(AccumuloVFSClassLoader.VFS_CLASSLOADER_SYSTEM_CLASSPATH_PROPERTY,
 "", PropertyType.STRING,
diff --git 
a/core/src/main/java/org/apache/accumulo/core/iterators/system/VisibilityTransformingIterator.java
 
b/core/src/main/java/org/apache/accumulo/core/iterators/system/VisibilityTransformingIterator.java
new file mode 100644
index 00..ef707a8b9e
--- /dev/null
+++ 
b/core/src/main/java/org/apache/accumulo/core/iterators/system/VisibilityTransformingIterator.java
@@ -0,0 +1,204 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.core.iterators.system;
+
+import org.apache.accumulo.core.data.ArrayByteSequence;
+import org.apache.accumulo.core.data.ByteSequence;
+import org.apache.accumulo.core.data.Key;
+import org.apache.accumulo.core.data.PartialKey;
+import org.apache.accumulo.core.data.Range;
+import org.apache.accumulo.core.data.Value;
+import org.apache.accumulo.core.iterators.IteratorEnvironment;
+import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
+import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.hadoop.io.Text;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.AbstractMap;
+import java.util.Collection;
+import java.util.LinkedList;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.TreeMap;
+
+public abstract class VisibilityTransformingIterator implements SortedKeyValueIterator<Key,Value> {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(VisibilityTransformingIterator.class);
+
+  private SortedKeyValueIterator<Key,Value> source;
+  private LinkedList<Map.Entry<Key,Value>> sourceKvPairs = new LinkedList<>();
+  private NavigableMap<Key,Value> vtiKvPairs = new TreeMap<>();
+
+  private Map.Entry<Key,Value> topEntry;
+  private Range seekRange;
+
+  private final Text rowHolder = new Text();
+  private final Text cfHolder = new Text();
+  private final Text cqHolder = new Text();
+
+  @Override
+  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
+this.source = source;
+  }
+
+  @Override
+  public boolean hasTop() {
+return topEntry != null;
+  }
+
+  @Override
+  public Key getTopKey() {
+return topEntry.getKey();
+  }
+
+  @Override
+  public Value getTopValue() {
+return topEntry.getValue();
+  }
+
+  @Override
+  public void next() throws IOException {
+if (sourceKvPairs.isEmpty() && vtiKvPairs.isEmpty()) {
+  consumeSource();
+}
+setTop();
+  }
+
+  private void consumeSource() throws IOException {
+if (!source.hasTop()) {
+  return;
+}
+Key sourceTop = source.getTopKey();
+Key nextKey = sourceTop.followingKey(PartialKey.ROW_COLFAM_COLQUAL);
+while (source.hasTop() && source.getTopKey().compareTo(nextKey) < 0 && 

Accumulo-1.8 - Build # 234 - Fixed

2017-11-30 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-1.8 (build #234)

Status: Fixed

Check console output at https://builds.apache.org/job/Accumulo-1.8/234/ to view 
the results.

Accumulo-1.8 - Build # 233 - Still Failing

2017-11-30 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-1.8 (build #233)

Status: Still Failing

Check console output at https://builds.apache.org/job/Accumulo-1.8/233/ to view 
the results.

[jira] [Assigned] (ACCUMULO-4744) Using RFile API with cache and multiple files hides data

2017-11-30 Thread Keith Turner (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Turner reassigned ACCUMULO-4744:
--

Assignee: Keith Turner

> Using RFile API with cache and multiple files hides data
> 
>
> Key: ACCUMULO-4744
> URL: https://issues.apache.org/jira/browse/ACCUMULO-4744
> Project: Accumulo
>  Issue Type: Bug
>Affects Versions: 1.8.0, 1.8.1
>Reporter: Keith Turner
>Assignee: Keith Turner
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.8.2, 2.0.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Noticed this bug in source code while working on ACCUMULO-4641.  When using 
> the RFile API introduced in 1.8 to read from multiple files with cache 
> enabled, not all data may be seen.  This happens because internally the code 
> gives all input sources the same cache id.  Therefore index and data blocks 
> from multiple files collide in the cache.
> This bug does not happen when reading data through tserver, only the RFile 
> API.
> {code:java}
>   Scanner scanner =
>RFile.newScanner()
>.from(file1, file2, file3)   //multiple input files
>.withFileSystem(localFs)
>.withIndexCache(100)   //enabled cache 
>.withDataCache(1000)  //enabled cache
>.build();
> {code}
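
The collision is easiest to picture if the block cache key is thought of as roughly (cacheId, block offset): with a single shared cacheId, block 0 of file1 and block 0 of file2 map to the same cache entry. The sketch below only illustrates that idea; the names do not come from CachableBlockFile.

{code:java}
// Hedged illustration: why a per-file cacheId matters. Class and field names are made up.
final class BlockCacheKey {
  final String cacheId;   // must be unique per underlying file, e.g. "source-0", "source-1"
  final long offset;

  BlockCacheKey(String cacheId, long offset) {
    this.cacheId = cacheId;
    this.offset = offset;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof BlockCacheKey))
      return false;
    BlockCacheKey other = (BlockCacheKey) o;
    return cacheId.equals(other.cacheId) && offset == other.offset;
  }

  @Override
  public int hashCode() {
    return 31 * cacheId.hashCode() + Long.hashCode(offset);
  }
}
{code}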





[jira] [Resolved] (ACCUMULO-4744) Using RFile API with cache and multiple files hides data

2017-11-30 Thread Keith Turner (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Turner resolved ACCUMULO-4744.

Resolution: Fixed

> Using RFile API with cache and multiple files hides data
> 
>
> Key: ACCUMULO-4744
> URL: https://issues.apache.org/jira/browse/ACCUMULO-4744
> Project: Accumulo
>  Issue Type: Bug
>Affects Versions: 1.8.0, 1.8.1
>Reporter: Keith Turner
>Assignee: Keith Turner
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.8.2, 2.0.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Noticed this bug in source code while working on ACCUMULO-4641.  When using 
> the RFile API introduced in 1.8 to read from multiple files with cache 
> enabled, not all data may be seen.  This happens because internally the code 
> gives all input sources the same cache id.  Therefore index and data blocks 
> from multiple files collide in the cache.
> This bug does not happen when reading data through tserver, only the RFile 
> API.
> {code:java}
>   Scanner scanner =
>RFile.newScanner()
>.from(file1, file2, file3)   //multiple input files
>.withFileSystem(localFs)
>.withIndexCache(100)   //enabled cache 
>.withDataCache(1000)  //enabled cache
>.build();
> {code}





[jira] [Updated] (ACCUMULO-4744) Using RFile API with cache and multiple files hides data

2017-11-30 Thread Keith Turner (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Turner updated ACCUMULO-4744:
---
Fix Version/s: 2.0.0

> Using RFile API with cache and multiple files hides data
> 
>
> Key: ACCUMULO-4744
> URL: https://issues.apache.org/jira/browse/ACCUMULO-4744
> Project: Accumulo
>  Issue Type: Bug
>Affects Versions: 1.8.0, 1.8.1
>Reporter: Keith Turner
>Assignee: Keith Turner
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.8.2, 2.0.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Noticed this bug in source code while working on ACCUMULO-4641.  When using 
> the RFile API introduced in 1.8 to read from multiple files with cache 
> enabled, not all data may be seen.  This happens because internally the code 
> gives all input sources the same cache id.  Therefore index and data blocks 
> from multiple files collide in the cache.
> This bug does not happen when reading data through tserver, only the RFile 
> API.
> {code:java}
>   Scanner scanner =
>RFile.newScanner()
>.from(file1, file2, file3)   //multiple input files
>.withFileSystem(localFs)
>.withIndexCache(100)   //enabled cache 
>.withDataCache(1000)  //enabled cache
>.build();
> {code}





Accumulo-1.8 - Build # 232 - Still Failing

2017-11-30 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-1.8 (build #232)

Status: Still Failing

Check console output at https://builds.apache.org/job/Accumulo-1.8/232/ to view 
the results.

[GitHub] keith-turner closed pull request #324: ACCUMULO-4744 Fixed RFile API scanner bug

2017-11-30 Thread GitBox
keith-turner closed pull request #324: ACCUMULO-4744 Fixed RFile API scanner bug
URL: https://github.com/apache/accumulo/pull/324
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java 
b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java
index 4dfba68850..bc0df8253f 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/rfile/RFileScanner.java
@@ -265,7 +265,8 @@ public SamplerConfiguration getSamplerConfiguration() {
   List> readers = new 
ArrayList<>(sources.length);
   for (int i = 0; i < sources.length; i++) {
 FSDataInputStream inputStream = (FSDataInputStream) 
sources[i].getInputStream();
-readers.add(new RFile.Reader(new CachableBlockFile.Reader(inputStream, 
sources[i].getLength(), opts.in.getConf(), dataCache, indexCache,
+
+readers.add(new RFile.Reader(new CachableBlockFile.Reader("source-" + 
i, inputStream, sources[i].getLength(), opts.in.getConf(), dataCache, 
indexCache,
 AccumuloConfiguration.getDefaultConfiguration())));
   }
 
diff --git 
a/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/CachableBlockFile.java
 
b/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/CachableBlockFile.java
index 4fa66341f2..3ecb5cafc7 100644
--- 
a/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/CachableBlockFile.java
+++ 
b/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/CachableBlockFile.java
@@ -147,7 +147,7 @@ public long getStartPos() throws IOException {
   public static class Reader implements BlockFileReader {
 private final RateLimiter readLimiter;
 private BCFile.Reader _bc;
-private String fileName = "not_available";
+private final String fileName;
 private BlockCache _dCache = null;
 private BlockCache _iCache = null;
 private InputStream fin = null;
@@ -251,16 +251,18 @@ public Reader(FileSystem fs, Path dataFile, Configuration 
conf, BlockCache data,
   this.readLimiter = readLimiter;
 }
 
-public  
Reader(InputStreamType fsin, long len, Configuration conf, BlockCache data, 
BlockCache index,
-AccumuloConfiguration accumuloConfiguration) throws IOException {
+public  Reader(String 
cacheId, InputStreamType fsin, long len, Configuration conf, BlockCache data,
+BlockCache index, AccumuloConfiguration accumuloConfiguration) throws 
IOException {
+  this.fileName = cacheId;
   this._dCache = data;
   this._iCache = index;
   this.readLimiter = null;
   init(fsin, len, conf, accumuloConfiguration);
 }
 
-public  
Reader(InputStreamType fsin, long len, Configuration conf,
+public  Reader(String 
cacheId, InputStreamType fsin, long len, Configuration conf,
 AccumuloConfiguration accumuloConfiguration) throws IOException {
+  this.fileName = cacheId;
   this.readLimiter = null;
   init(fsin, len, conf, accumuloConfiguration);
 }
diff --git 
a/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileTest.java 
b/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileTest.java
index 4993810b3a..8748d8c57b 100644
--- a/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileTest.java
+++ b/core/src/test/java/org/apache/accumulo/core/client/rfile/RFileTest.java
@@ -25,6 +25,7 @@
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.Iterator;
+import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
 import java.util.Random;
@@ -623,4 +624,27 @@ private Reader getReader(LocalFileSystem localFs, String 
testFile) throws IOExce
 
.withTableConfiguration(AccumuloConfiguration.getDefaultConfiguration()).build();
 return reader;
   }
+
+  @Test
+  public void testMultipleFilesAndCache() throws Exception {
+SortedMap<Key,Value> testData = createTestData(100, 10, 10);
+List<String> files = Arrays.asList(createTmpTestFile(), createTmpTestFile(), createTmpTestFile());
+
+LocalFileSystem localFs = FileSystem.getLocal(new Configuration());
+
+for (int i = 0; i < files.size(); i++) {
+  try (RFileWriter writer = 
RFile.newWriter().to(files.get(i)).withFileSystem(localFs).build()) {
+for (Entry<Key,Value> entry : testData.entrySet()) {
+  if (entry.getKey().hashCode() % files.size() == i) {
+writer.append(entry.getKey(), entry.getValue());
+  }
+}
+  }
+}
+
+Scanner scanner = RFile.newScanner().from(files.toArray(new 

[GitHub] keith-turner commented on issue #324: ACCUMULO-4744 Fixed RFile API scanner bug

2017-11-30 Thread GitBox
keith-turner commented on issue #324: ACCUMULO-4744 Fixed RFile API scanner bug
URL: https://github.com/apache/accumulo/pull/324#issuecomment-348317934
 
 
   Merged in ed313f7




[GitHub] keith-turner commented on a change in pull request #325: ACCUMULO-2341?

2017-11-30 Thread GitBox
keith-turner commented on a change in pull request #325: ACCUMULO-2341?
URL: https://github.com/apache/accumulo/pull/325#discussion_r154186590
 
 

 ##
 File path: server/base/src/main/java/org/apache/accumulo/server/util/Admin.java
 ##
 @@ -354,12 +354,8 @@ public void run() {
   }
 
   private static void stopServer(final ClientContext context, final boolean 
tabletServersToo) throws AccumuloException, AccumuloSecurityException {
-MasterClient.executeVoid(context, new ClientExec<MasterClientService.Client>() {
-  @Override
-  public void execute(MasterClientService.Client client) throws Exception {
-client.shutdown(Tracer.traceInfo(), context.rpcCreds(), 
tabletServersToo);
-  }
-});
+MasterClient.executeVoidWithConnRetry(context,
 
 Review comment:
   I am thinking we may want to provide a way to not retry at all. I don't 
think retrying twice vs once offers much benefit; if it fails the first time, 
it will likely fail the second time when retried immediately.
   
   I have been looking at the code, and there is a retry loop in 
`executeGeneric()` in addition to the retry loop in `getConnectionWithRetry()`. 
I am thinking this code should call an `executeVoid()` method that has no 
retry loops. Maybe something like the following.
   
   ```java
  public static void executeVoid(ClientContext context, ClientExec<MasterClientService.Client> exec, boolean retry)
  throws AccumuloException, AccumuloSecurityException {
   if (retry) {
 executeVoid(context, exec);
   } else {
 MasterClientService.Client client = null;
   
 try {
   // TODO this gets a connection without a timeout
   client = getConnection(context);
   if (client == null) {
 throw new AccumuloException("Failed to connect to master " + 
context.getInstance().getMasterLocations());
   }
   exec.execute(client);
 } catch (ThriftSecurityException e) {
   throw new AccumuloSecurityException(e.user, e.code, e);
 } catch (AccumuloException e) {
   throw e;
 } catch (RuntimeException e) {
   throw e;
 } catch (Exception e) {
   throw new AccumuloException(e);
 } finally {
   if (client != null)
 close(client);
 }
   }
 }
   ```
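
   A hedged sketch of how `Admin.stopServer` might then call it with retries disabled (the lambda form assumes `ClientExec` has a single abstract method and a Java 8 source level):
   
   ```java
 // Illustrative call site only; the overload and parameter order follow the sketch above.
 private static void stopServer(final ClientContext context, final boolean tabletServersToo)
     throws AccumuloException, AccumuloSecurityException {
   MasterClient.executeVoid(context,
       client -> client.shutdown(Tracer.traceInfo(), context.rpcCreds(), tabletServersToo),
       false /* fail fast instead of retrying, as suggested for admin commands */);
 }
   ```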




[jira] [Updated] (ACCUMULO-3970) Generating multiple views of a value at scan time

2017-11-30 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/ACCUMULO-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated ACCUMULO-3970:
-
Labels: pull-request-available  (was: )

> Generating multiple views of a value at scan time
> -
>
> Key: ACCUMULO-3970
> URL: https://issues.apache.org/jira/browse/ACCUMULO-3970
> Project: Accumulo
>  Issue Type: New Feature
>Reporter: Russ Weeks
>Priority: Minor
>  Labels: pull-request-available
>
> It would be useful to have the ability to generate different representations 
> of a key-value pair at scan time, based on the scan authorizations.
> For example, consider [HIPPA safe harbour 
> de-identification|http://www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/De-identification/guidance.html#dates].
>  One of the rules for de-identifying a patient's date of birth is that if a 
> patient is 89 years old or younger, you can disclose his exact year of birth. 
> If a patient is 90 years old or over, you pretend that he's 90 years old.
> You can imagine implementing this as a key/value mapping in accumulo like,
> {{(pt_id, demographic, pt_dob, PII_DOB) -> "1925-08-22"}}
> {{(pt_id, demographic, pt_dob, SHD_DOB) -> "1925"}}
> Where the value corresponding to visibility SHD_DOB is produced at scan-time, 
> depending on the patient's current age.
> Another example would be the ability to produce a salted hash of a unique 
> identifier like a social security number or medical record number, where the 
> salt (or the hash algorithm, or the work factor...) could be specified 
> dynamically without having to re-code all the values in the system.
> More broadly speaking, this feature would give organizations more flexibility 
> to change how they deidentify, transform or anonymize data to suit different 
> access levels.
> Of course, to do this you'd need to have a pluggable component that can 
> process key/value pairs before visibilities are evaluated. I can see why this 
> might give a lot of people the heeby-jeebies but I'd like to gather as much 
> feedback as possible. Looking forward to hearing your thoughts!
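
The date-of-birth rule in the example maps to a small scan-time computation. The sketch below shows just that rule as a plain function, under one reading of "pretend that he's 90" (report the birth year that would make the patient exactly 90); it is not the proposed iterator API, and the names are made up.

{code:java}
import java.time.LocalDate;
import java.time.Period;

final class SafeHarborDob {
  // Patients aged 90+ get a clamped year of birth; younger patients keep their true year.
  static String deidentifiedBirthYear(LocalDate dob, LocalDate today) {
    int age = Period.between(dob, today).getYears();
    int reportedYear = age >= 90 ? today.getYear() - 90 : dob.getYear();
    return Integer.toString(reportedYear);
  }
}
{code}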





[GitHub] milleruntime commented on issue #43: ACCUMULO-3970

2017-11-30 Thread GitBox
milleruntime commented on issue #43: ACCUMULO-3970
URL: https://github.com/apache/accumulo/pull/43#issuecomment-348297853
 
 
   If no one objects over the next week, I will close this pull request.




Accumulo-1.7 - Build # 391 - Failure

2017-11-30 Thread Apache Jenkins Server
The Apache Jenkins build system has built Accumulo-1.7 (build #391)

Status: Failure

Check console output at https://builds.apache.org/job/Accumulo-1.7/391/ to view 
the results.

[GitHub] keith-turner closed pull request #322: ACCUMULO-4740 Enable GCM mode for crypto

2017-11-30 Thread GitBox
keith-turner closed pull request #322: ACCUMULO-4740 Enable GCM mode for crypto
URL: https://github.com/apache/accumulo/pull/322
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java 
b/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
index baf1818bff..f787d5e7f2 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java
@@ -81,6 +81,8 @@ else if (!prop.getType().isValidFormat(value))
 
   if (key.equals(Property.CRYPTO_CIPHER_SUITE.getKey())) {
 cipherSuite = Objects.requireNonNull(value);
+Preconditions.checkArgument(cipherSuite.equals("NullCipher") || 
cipherSuite.split("/").length == 3,
+"Cipher suite must be NullCipher or in the form 
algorithm/mode/padding. Suite: " + cipherSuite + " is invalid.");
   }
 
   if (key.equals(Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey())) {
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/Property.java 
b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
index 48a4796948..b885124dfd 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/Property.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/Property.java
@@ -49,14 +49,13 @@
   + "(future) other parts of the code."),
   @Experimental
   CRYPTO_CIPHER_SUITE("crypto.cipher.suite", "NullCipher", PropertyType.STRING,
-  "Describes the cipher suite to use for rfile encryption. If a WAL cipher 
suite is not set, it will default to this value. The suite should be in the "
-  + "form of algorithm/mode/padding, e.g. AES/CBC/NoPadding"),
+  "Describes the cipher suite to use for rfile encryption. The value must 
be either NullCipher or in the form of algorithm/mode/padding, "
+  + "e.g. AES/CBC/NoPadding"),
   @Experimental
-  CRYPTO_WAL_CIPHER_SUITE(
-  "crypto.wal.cipher.suite",
-  "NullCipher",
-  PropertyType.STRING,
-  "Describes the cipher suite to use for the write-ahead log. Defaults to 
'cyrpto.cipher.suite' and will use that value for WAL encryption unless 
otherwise specified."),
+  CRYPTO_WAL_CIPHER_SUITE("crypto.wal.cipher.suite", "", PropertyType.STRING,
+  "Describes the cipher suite to use for the write-ahead log. Defaults to 
'cyrpto.cipher.suite' "
+  + "and will use that value for WAL encryption unless otherwise 
specified. Valid suite values include: an empty string, NullCipher, or a string 
the "
+  + "form of algorithm/mode/padding, e.g. AES/CBC/NOPadding"),
   @Experimental
   CRYPTO_CIPHER_KEY_ALGORITHM_NAME("crypto.cipher.key.algorithm.name", 
"NullCipher", PropertyType.STRING,
   "States the name of the algorithm used for the key for the corresponding 
cipher suite. The key type must be compatible with the cipher suite."),
diff --git 
a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BCFile.java 
b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BCFile.java
index f9a61a7e38..e74558321a 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BCFile.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/BCFile.java
@@ -160,9 +160,8 @@ public WBlockState(Algorithm compressionAlgo, 
PositionedDataOutputStream fsOut,
 // *This* is also very important. We don't want the underlying stream 
messed with.
 cryptoParams.setRecordParametersToStream(false);
 
-// It is also important to make sure we get a new initialization 
vector on every call in here,
-// so set any existing one to null, in case we're reusing a parameters 
object for its RNG or other bits
-cryptoParams.setInitializationVector(null);
+// Create a new IV for the block or update an existing one in the case 
of GCM
+cryptoParams.updateInitializationVector();
 
 // Initialize the cipher including generating a new IV
 cryptoParams = cryptoModule.initializeCipher(cryptoParams);
diff --git 
a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
 
b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
index 10535e8312..fd210b159d 100644
--- 
a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
+++ 
b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
@@ -569,6 +569,52 @@ public void setBlockStreamSize(int blockStreamSize) {
 this.blockStreamSize = blockStreamSize;
   }
 
+  /**
+   * Returns the mode from the cipher suite. Assumes the suite is in the form 
of algorithm/mode/padding, 

[GitHub] jkrdev commented on issue #328: ACCUMULO-4743 Replaced general custom with tserver prefix for cache config

2017-11-30 Thread GitBox
jkrdev commented on issue #328: ACCUMULO-4743 Replaced general custom with 
tserver prefix for cache config
URL: https://github.com/apache/accumulo/pull/328#issuecomment-348284820
 
 
   @keith-turner If you could test this now that it builds, I would be grateful. Thanks.




[GitHub] milleruntime commented on issue #140: ACCUMULO-4419: Change how compression delegation works

2017-11-30 Thread GitBox
milleruntime commented on issue #140: ACCUMULO-4419: Change how compression 
delegation works
URL: https://github.com/apache/accumulo/pull/140#issuecomment-348241958
 
 
   OK sounds good.  If you don't get to it over the next week, maybe someone 
else can test it.
   
   @keith-turner were all your questions addressed?




[GitHub] bfach10 commented on a change in pull request #326: ACCUMULO-4745 Fixed broken links in tables table on monitor

2017-11-30 Thread GitBox
bfach10 commented on a change in pull request #326: ACCUMULO-4745 Fixed broken 
links in tables table on monitor
URL: https://github.com/apache/accumulo/pull/326#discussion_r154101209
 
 

 ##
 File path: 
server/monitor/src/main/java/org/apache/accumulo/monitor/rest/tables/TablesResource.java
 ##
 @@ -112,6 +112,7 @@ private static TablesList generateTables(TablesList 
tableNamespace) {
 for (Entry<String,Table.ID> entry : Tables.getNameToIdMap(HdfsZooInstance.getInstance()).entrySet()) {
   String tableName = entry.getKey();
   Table.ID tableId = entry.getValue();
+  String canonicalTableId = tableId.canonicalID();
 
 Review comment:
   @ctubbsii I implemented this, keeping `TablesResource` the way it was and 
having `TableInformation` still take the `Table.ID` object as a parameter; the 
constructor extracts the `canonicalID` from it and stores that.
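
   A hedged sketch of the constructor shape being described (other fields and parameters of `TableInformation` are omitted or assumed):
   
   ```java
 // Illustrative only: accept Table.ID but store its canonical string form.
 public TableInformation(String tableName, Table.ID tableId) {
   this.tablename = tableName;
   this.tableId = tableId.canonicalID();   // canonical string used by the monitor's links
 }
   ```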




[GitHub] phrocker commented on issue #140: ACCUMULO-4419: Change how compression delegation works

2017-11-30 Thread GitBox
phrocker commented on issue #140: ACCUMULO-4419: Change how compression 
delegation works
URL: https://github.com/apache/accumulo/pull/140#issuecomment-348205514
 
 
   @milleruntime it was finished, but not merged. If I have an opportunity to 
make sure it works I'll merge. Thanks. 

