Accumulo-Master - Build # 2262 - Unstable
The Apache Jenkins build system has built Accumulo-Master (build #2262) Status: Unstable Check console output at https://builds.apache.org/job/Accumulo-Master/2262/ to view the results.
Accumulo-Master - Build # 2261 - Still Failing
The Apache Jenkins build system has built Accumulo-Master (build #2261) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-Master/2261/ to view the results.
Accumulo-1.8 - Build # 265 - Fixed
The Apache Jenkins build system has built Accumulo-1.8 (build #265) Status: Fixed Check console output at https://builds.apache.org/job/Accumulo-1.8/265/ to view the results.
Accumulo-Master - Build # 2260 - Still Failing
The Apache Jenkins build system has built Accumulo-Master (build #2260) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-Master/2260/ to view the results.
[jira] [Commented] (ACCUMULO-4803) Remove automatic KEK creation if SecretKeyEncryptionStrategies cannot find KEK file
[ https://issues.apache.org/jira/browse/ACCUMULO-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364910#comment-16364910 ] Christopher Tubbs commented on ACCUMULO-4803: - This only applies to the CachingHDFSSecretKeyEncryptionStrategy and maybe the NonCachingSecretKeyEncryptionStrategy. These are insecure example strategies that nobody should really be using anyway. We can certainly change this behavior in them, but we could just as easily provide better implementations entirely to be available for use. > Remove automatic KEK creation if SecretKeyEncryptionStrategies cannot find > KEK file > --- > > Key: ACCUMULO-4803 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4803 > Project: Accumulo > Issue Type: Improvement > Reporter: Nick Felts > Assignee: Nick Felts > Priority: Minor > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Accumulo-1.8 - Build # 264 - Still Failing
The Apache Jenkins build system has built Accumulo-1.8 (build #264) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-1.8/264/ to view the results.
[jira] [Updated] (ACCUMULO-4812) Dependency Conflict: different Jars contain the incompatible classes with the same name, which leads to NoSuchMethodException
[ https://issues.apache.org/jira/browse/ACCUMULO-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Tubbs updated ACCUMULO-4812: Labels: (was: releasenotes) > Dependency Conflict: different Jars contain the incompatible classes with the > same name, which leads to NoSuchMethodException > - > > Key: ACCUMULO-4812 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4812 > Project: Accumulo > Issue Type: Bug > Components: core > Affects Versions: 2.0.0 > Reporter: PandaMonkey > Assignee: Christopher Tubbs > Priority: Major > Attachments: Conflicting details.txt > > > Hi, by analyzing the accumulo-core:2.0.0-SNAPSHOT > accumulo-master\core\pom.xml file, we found that several duplicate classes > exist in different JARs: "commons-beanutils:commons-*beanutils-core*:1.8.0" > and "commons-beanutils:commons-*beanutils*:1.7.0". Since the JVM only loads > the class that appears first on the classpath and shadows the other > duplicates with the same name, it can throw "*NoSuchMethodException*" or > "*NoSuchMethodError*" if the duplicate classes are inconsistent. We spent > some effort scanning for differences between these duplicate classes; the > conflicting details are listed in the attachment. We hope our report helps. > Thanks :). > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (ACCUMULO-4812) Dependency Conflict: different Jars contain the incompatible classes with the same name, which leads to NoSuchMethodException
[ https://issues.apache.org/jira/browse/ACCUMULO-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Tubbs resolved ACCUMULO-4812. - Resolution: Not A Problem Assignee: Christopher Tubbs Fix Version/s: (was: 2.0.0) I analyzed the dependency tree for how these two jars are being used by Accumulo. We are not using them directly at all. They are both being brought in through the use of commons-configuration at compile time. commons-configuration has a direct dependency on commons-beanutils-core:1.8.0 and also an indirect dependency (via commons-digester:1.8) on commons-beanutils:1.7.0. Since we're not using it directly, and we haven't seen a break as a result of this, I must conclude that whatever code paths we're using in commons-configuration do not hit the conflict. However, it could still be an issue if any third-party code also uses Accumulo's class path and needs one or the other. Further, we're not shipping commons-beanutils at all in our assembly tarball... so if it's being used at all on the class path, it's being brought in through Hadoop or ZooKeeper's class path... and should be addressed by their packaging. > Dependency Conflict: different Jars contain the incompatible classes with the > same name, which leads to NoSuchMethodException > - > > Key: ACCUMULO-4812 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4812 > Project: Accumulo > Issue Type: Bug > Components: core > Affects Versions: 2.0.0 > Reporter: PandaMonkey > Assignee: Christopher Tubbs > Priority: Major > Labels: releasenotes > Attachments: Conflicting details.txt
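A quick way to see which copy of a duplicated class actually wins on a given classpath is to ask the class itself where it was loaded from. This is a diagnostic sketch, not part of the thread; `java.lang.String` is used here only as a stand-in, since on a real Accumulo classpath you would point it at a class such as `org.apache.commons.beanutils.BeanUtils`:

```java
import java.security.CodeSource;

public class WhichJar {
    // Report the jar (or class loader) a class was actually loaded from; when
    // two jars provide the same class name, this reveals which copy won.
    static String describeSource(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return c.getName() + " loaded from "
            + (src == null ? "the bootstrap/platform class loader" : src.getLocation());
    }

    public static void main(String[] args) {
        // Core JDK classes have no code source, so this prints the fallback text.
        System.out.println(describeSource(String.class));
    }
}
```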
Accumulo-1.8 - Build # 263 - Still Failing
The Apache Jenkins build system has built Accumulo-1.8 (build #263) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-1.8/263/ to view the results.
[jira] [Commented] (ACCUMULO-4812) Dependency Conflict: different Jars contain the incompatible classes with the same name, which leads to NoSuchMethodException
[ https://issues.apache.org/jira/browse/ACCUMULO-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364829#comment-16364829 ] Christopher Tubbs commented on ACCUMULO-4812: - Thanks for the notice! At the very least, we should caution our users to be careful about how they construct their class path for production. > Dependency Conflict: different Jars contain the incompatible classes with the > same name, which leads to NoSuchMethodException > - > > Key: ACCUMULO-4812 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4812 > Project: Accumulo > Issue Type: Bug > Components: core > Affects Versions: 2.0.0 > Reporter: PandaMonkey > Priority: Major > Labels: releasenotes > Fix For: 2.0.0 > > Attachments: Conflicting details.txt
[jira] [Updated] (ACCUMULO-4812) Dependency Conflict: different Jars contain the incompatible classes with the same name, which leads to NoSuchMethodException
[ https://issues.apache.org/jira/browse/ACCUMULO-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Tubbs updated ACCUMULO-4812: Labels: releasenotes (was: ) > Dependency Conflict: different Jars contain the incompatible classes with the > same name, which leads to NoSuchMethodException > - > > Key: ACCUMULO-4812 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4812 > Project: Accumulo > Issue Type: Bug > Components: core > Affects Versions: 2.0.0 > Reporter: PandaMonkey > Priority: Major > Labels: releasenotes > Fix For: 2.0.0 > > Attachments: Conflicting details.txt
Accumulo-1.8 - Build # 262 - Still Failing
The Apache Jenkins build system has built Accumulo-1.8 (build #262) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-1.8/262/ to view the results.
[GitHub] ctubbsii commented on a change in pull request #381: ACCUMULO-4413 Avoid Copy/Sort column on WAL recovery exceeds 100%
ctubbsii commented on a change in pull request #381: ACCUMULO-4413 Avoid Copy/Sort column on WAL recovery exceeds 100% URL: https://github.com/apache/accumulo/pull/381#discussion_r168320345 ## File path: server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java ## @@ -242,7 +244,9 @@ public void startWatchingForRecoveryLogs(ThreadPoolExecutor distWorkQThreadPool) RecoveryStatus status = new RecoveryStatus(); status.name = entries.getKey(); try { - status.progress = entries.getValue().getBytesCopied() / (0.0 + conf.getAsBytes(Property.TSERV_WALOG_MAX_SIZE)); + double progress = entries.getValue().getBytesCopied() / walBlockSize; + // to be sure progress does not exceed 100% + status.progress = Math.max(progress, 99.0); Review comment: I think this keeps the status permanently at 99%. I think we want something like `Math.min(progress, 99.9)` This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
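The reviewer's point can be checked with a tiny sketch (names here are illustrative, not Accumulo's; `99.9` is the cap the reviewer proposes): `Math.max(progress, 99.0)` can never return a value below 99, so the column would be pinned there, while `Math.min` caps the value from above.

```java
public class ProgressClamp {
    // Clamp a copy/sort progress value so the column never reports more than 99.9.
    // Math.min caps the value from above; Math.max (the bug under review) would
    // instead pin every reading at the floor of 99.
    static double clamp(double bytesCopied, double walBlockSize) {
        double progress = bytesCopied / walBlockSize;
        return Math.min(progress, 99.9);
    }

    public static void main(String[] args) {
        System.out.println(clamp(50.0, 100.0)); // normal case: the true ratio
        System.out.println(clamp(250.0, 1.0));  // overshoot: capped at 99.9
        System.out.println(Math.max(0.5, 99.0)); // the buggy form: stuck at 99
    }
}
```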
[GitHub] keith-turner closed pull request #376: ACCUMULO-4798 optimized stat in ZooCache
keith-turner closed pull request #376: ACCUMULO-4798 optimized stat in ZooCache URL: https://github.com/apache/accumulo/pull/376 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java index 801ee2cef3..6fa4967b01 100644 --- a/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java +++ b/fate/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java @@ -18,11 +18,6 @@ import static java.nio.charset.StandardCharsets.UTF_8; -import java.io.ByteArrayInputStream; -import java.io.ByteArrayOutputStream; -import java.io.DataInputStream; -import java.io.DataOutputStream; -import java.io.IOException; import java.util.Collections; import java.util.ConcurrentModificationException; import java.util.HashMap; @@ -60,14 +55,39 @@ private final Lock cacheReadLock = cacheLock.readLock(); private final HashMap<String,byte[]> cache; - private final HashMap<String,Stat> statCache; + private final HashMap<String,ZcStat> statCache; private final HashMap<String,List<String>> childrenCache; private final ZooReader zReader; + public static class ZcStat { +private long ephemeralOwner; + +public ZcStat() { + +} + +private ZcStat(Stat stat) { + this.ephemeralOwner = stat.getEphemeralOwner(); +} + +public long getEphemeralOwner() { + return ephemeralOwner; +} + +private void set(ZcStat cachedStat) { + this.ephemeralOwner = cachedStat.ephemeralOwner; +} + +@VisibleForTesting +public void setEphemeralOwner(long ephemeralOwner) { + this.ephemeralOwner = ephemeralOwner; +} + } + private static class ImmutableCacheCopies { final Map<String,byte[]> cache; -final Map<String,Stat> statCache; +final Map<String,ZcStat> statCache; final Map<String,List<String>> childrenCache; ImmutableCacheCopies() { @@ -76,7 +96,7 @@ childrenCache = Collections.emptyMap(); }
-ImmutableCacheCopies(Map<String,byte[]> cache, Map<String,Stat> statCache, Map<String,List<String>> childrenCache) { +ImmutableCacheCopies(Map<String,byte[]> cache, Map<String,ZcStat> statCache, Map<String,List<String>> childrenCache) { this.cache = Collections.unmodifiableMap(new HashMap<>(cache)); this.statCache = Collections.unmodifiableMap(new HashMap<>(statCache)); this.childrenCache = Collections.unmodifiableMap(new HashMap<>(childrenCache)); @@ -88,7 +108,7 @@ this.childrenCache = Collections.unmodifiableMap(new HashMap<>(childrenCache)); } -ImmutableCacheCopies(Map<String,byte[]> cache, Map<String,Stat> statCache, ImmutableCacheCopies prev) { +ImmutableCacheCopies(Map<String,byte[]> cache, Map<String,ZcStat> statCache, ImmutableCacheCopies prev) { this.cache = Collections.unmodifiableMap(new HashMap<>(cache)); this.statCache = Collections.unmodifiableMap(new HashMap<>(statCache)); this.childrenCache = prev.childrenCache; @@ -322,20 +342,20 @@ public T retry() { * status object to populate * @return path data, or null if non-existent */ - public byte[] get(final String zPath, final Stat status) { + public byte[] get(final String zPath, final ZcStat status) { ZooRunnable<byte[]> zr = new ZooRunnable<byte[]>() { @Override public byte[] run() throws KeeperException, InterruptedException { -Stat stat = null; +ZcStat zstat = null; // only read volatile once so following code works with a consistent snapshot ImmutableCacheCopies lic = immutableCache; byte[] val = lic.cache.get(zPath); if (val != null || lic.cache.containsKey(zPath)) { if (status != null) { -stat = lic.statCache.get(zPath); -copyStats(status, stat); +zstat = lic.statCache.get(zPath); +copyStats(status, zstat); } return val; } @@ -348,7 +368,7 @@ public T retry() { cacheWriteLock.lock(); try { final ZooKeeper zooKeeper = getZooKeeper(); - stat = zooKeeper.exists(zPath, watcher); + Stat stat = zooKeeper.exists(zPath, watcher); byte[] data = null; if (stat == null) { if (log.isTraceEnabled()) { @@ -357,6 +377,7 @@ public T retry() { } else { try { data = zooKeeper.getData(zPath, watcher, stat); + zstat = new ZcStat(stat); } catch (KeeperException.BadVersionException e1)
{ throw new ConcurrentModificationException(); } catch (KeeperException.NoNodeException e2) { @@ -366,8 +387,8 @@ public T retry() {
[GitHub] keith-turner commented on issue #376: ACCUMULO-4798 optimized stat in ZooCache
keith-turner commented on issue #376: ACCUMULO-4798 optimized stat in ZooCache URL: https://github.com/apache/accumulo/pull/376#issuecomment-365755449 Merged in eb15e45
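The idea behind the PR's `ZcStat` can be sketched in a few lines (a simplified stand-in, not Accumulo's class, though the names follow the diff): instead of caching ZooKeeper's full `Stat`, cache a tiny object holding only the field callers actually read.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class StatCacheSketch {
    // Minimal stat holder: only the ephemeral owner is retained, so the cache
    // stores one long per node instead of a full ZooKeeper Stat.
    public static class ZcStat {
        private final long ephemeralOwner;
        public ZcStat(long ephemeralOwner) { this.ephemeralOwner = ephemeralOwner; }
        public long getEphemeralOwner() { return ephemeralOwner; }
    }

    private final ConcurrentMap<String, ZcStat> statCache = new ConcurrentHashMap<>();

    public void put(String zPath, long ephemeralOwner) {
        statCache.put(zPath, new ZcStat(ephemeralOwner));
    }

    public ZcStat get(String zPath) {
        return statCache.get(zPath);
    }
}
```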
Accumulo-1.8 - Build # 261 - Still Failing
The Apache Jenkins build system has built Accumulo-1.8 (build #261) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-1.8/261/ to view the results.
[jira] [Updated] (ACCUMULO-4788) Improve Thrift Transport pool
[ https://issues.apache.org/jira/browse/ACCUMULO-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated ACCUMULO-4788: - Labels: pull-request-available (was: ) > Improve Thrift Transport pool > - > > Key: ACCUMULO-4788 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4788 > Project: Accumulo > Issue Type: Improvement > Reporter: Keith Turner > Assignee: Keith Turner > Priority: Major > Labels: pull-request-available > Fix For: 2.0.0 > > > Accumulo has a pool of recently opened connections to tablet servers. When > connecting to tablet servers, this pool is checked first. The pool is built > around a map of lists. There are two problems with this pool: > * It has a global lock around the map of lists > * When trying to find a connection, it does a linear search for a > non-reserved connection (this is per tablet server) > Could possibly move to a model of having a list of unreserved connections and > a set of reserved connections per tablet server. Then to get a connection, we > could remove from the unreserved list and add to the reserved set. This > would be a constant-time operation. > For the locking, we could move to a model of using a concurrent map and > locking per tserver instead of locking the entire map. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
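The model the issue proposes can be sketched generically (a simplified illustration, not Accumulo's `ThriftTransportPool`): a concurrent map keyed by server, whose entries each hold an unreserved deque and a reserved set, so reserving is constant time and locking is per server rather than global.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PerServerPool<K, C> {
    // Per-server state: connections waiting to be handed out, and those in use.
    static class Entry<C> {
        final Deque<C> unreserved = new ArrayDeque<>();
        final Set<C> reserved = new HashSet<>();
    }

    private final ConcurrentMap<K, Entry<C>> pool = new ConcurrentHashMap<>();

    public void add(K server, C conn) {
        Entry<C> e = pool.computeIfAbsent(server, k -> new Entry<>());
        synchronized (e) { e.unreserved.addLast(conn); }
    }

    // Constant-time reserve: pop from the unreserved deque, record as reserved.
    public C reserve(K server) {
        Entry<C> e = pool.computeIfAbsent(server, k -> new Entry<>());
        synchronized (e) { // lock covers only this server's entry
            C conn = e.unreserved.pollFirst();
            if (conn != null) e.reserved.add(conn);
            return conn;
        }
    }

    public void release(K server, C conn) {
        Entry<C> e = pool.computeIfAbsent(server, k -> new Entry<>());
        synchronized (e) {
            if (e.reserved.remove(conn)) e.unreserved.addLast(conn);
        }
    }
}
```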
[GitHub] keith-turner closed pull request #385: ACCUMULO-4788
keith-turner closed pull request #385: ACCUMULO-4788 URL: https://github.com/apache/accumulo/pull/385 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportKey.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportKey.java index f1ab501b0a..dcae49cca3 100644 --- a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportKey.java +++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportKey.java @@ -86,6 +86,10 @@ public boolean equals(Object o) { && (!isSasl() || (ttk.isSasl() && saslParams.equals(ttk.saslParams))); } + public final void precomputeHashCode() { +hashCode(); + } + @Override public int hashCode() { if (hash == -1) diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java index 3d36e694a6..c221607328 100644 --- a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java +++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftTransportPool.java @@ -41,6 +41,8 @@ import org.slf4j.LoggerFactory; import com.google.common.annotations.VisibleForTesting; +import com.google.common.base.Preconditions; +import com.google.common.collect.Iterables; public class ThriftTransportPool { private static SecurityPermission TRANSPORT_POOL_PERMISSION = new SecurityPermission("transportPoolPermission"); @@ -48,7 +50,26 @@ private static final Random random = new Random(); private long killTime = 1000 * 3; - private Map<ThriftTransportKey,List<CachedConnection>> cache = new HashMap<>(); + private static class CachedConnections { +LinkedList<CachedConnection> unreserved = new LinkedList<>(); +Map<CachedTTransport,CachedConnection> reserved = new HashMap<>(); + +public
CachedConnection reserveAny() { + if (unreserved.size() > 0) { +CachedConnection cachedConnection = unreserved.removeFirst(); +cachedConnection.reserve(); +reserved.put(cachedConnection.transport, cachedConnection); +if (log.isTraceEnabled()) { + log.trace("Using existing connection to {}", cachedConnection.transport.cacheKey); +} +return cachedConnection; + } + + return null; +} + } + + private Map<ThriftTransportKey,CachedConnections> cache = new HashMap<>(); private Map<ThriftTransportKey,Long> errorCount = new HashMap<>(); private Map<ThriftTransportKey,Long> errorTime = new HashMap<>(); private Set<ThriftTransportKey> serversWarnedAbout = new HashSet<>(); @@ -66,15 +87,17 @@ public CachedConnection(CachedTTransport t) { this.transport = t; } -void setReserved(boolean reserved) { - this.transport.setReserved(reserved); +void reserve() { + Preconditions.checkState(!this.transport.reserved); + this.transport.setReserved(true); } -boolean isReserved() { - return this.transport.reserved; +void unreserve() { + Preconditions.checkState(this.transport.reserved); + this.transport.setReserved(false); } -CachedTTransport transport; +final CachedTTransport transport; long lastReturnTime; } @@ -98,20 +121,18 @@ private void closeConnections() { ArrayList<CachedConnection> connectionsToClose = new ArrayList<>(); synchronized (pool) { - for (List<CachedConnection> ccl : pool.getCache().values()) { -Iterator<CachedConnection> iter = ccl.iterator(); + for (CachedConnections cachedConns : pool.getCache().values()) { +Iterator<CachedConnection> iter = cachedConns.unreserved.iterator(); while (iter.hasNext()) { CachedConnection cachedConnection = iter.next(); - if (!cachedConnection.isReserved() && System.currentTimeMillis() - cachedConnection.lastReturnTime > pool.killTime) { + if (System.currentTimeMillis() - cachedConnection.lastReturnTime > pool.killTime) { connectionsToClose.add(cachedConnection); iter.remove(); } } - } - for (List<CachedConnection> ccl : pool.getCache().values()) { -for (CachedConnection cachedConnection : ccl) { +for (CachedConnection cachedConnection : cachedConns.reserved.values()) { cachedConnection.transport.checkForStuckIO(STUCK_THRESHOLD); } }
@@ -389,21 +410,21 @@ public TTransport getTransport(HostAndPort location, long milliseconds, ClientCo } private TTransport getTransport(ThriftTransportKey cacheKey) throws TTransportException { +// compute hash code outside of lock, this lowers the time the lock is held +cacheKey.precomputeHashCode();
[GitHub] keith-turner closed pull request #384: ACCUMULO-4789 Avoid creating expensive thread name
keith-turner closed pull request #384: ACCUMULO-4789 Avoid creating expensive thread name URL: https://github.com/apache/accumulo/pull/384 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java index 4f38e0cfad..b7b1c6732c 100644 --- a/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java +++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ThriftScanner.java @@ -414,11 +414,12 @@ else if (log.isTraceEnabled()) scanState.prevLoc = loc; if (scanState.scanID == null) { -String msg = "Starting scan tserver=" + loc.tablet_location + " tablet=" + loc.tablet_extent + " range=" + scanState.range + " ssil=" -+ scanState.serverSideIteratorList + " ssio=" + scanState.serverSideIteratorOptions; -Thread.currentThread().setName(msg); + +Thread.currentThread().setName("Starting scan tserver=" + loc.tablet_location + " tableId=" + loc.tablet_extent.getTableId()); if (log.isTraceEnabled()) { + String msg = "Starting scan tserver=" + loc.tablet_location + " tablet=" + loc.tablet_extent + " range=" + scanState.range + " ssil=" + + scanState.serverSideIteratorList + " ssio=" + scanState.serverSideIteratorOptions; log.trace("tid={} {}", Thread.currentThread().getId(), msg); timer = new OpTimer().start(); }
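The pattern in this diff generalizes: keep the always-set thread name cheap, and build the expensive detail string only when trace logging is actually enabled. A simplified sketch (using `java.util.logging` as a stand-in for slf4j; names are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class CheapThreadName {
    private static final Logger log = Logger.getLogger(CheapThreadName.class.getName());

    // Cheap name: only a couple of short fields, safe to build on every scan.
    static String scanThreadName(String server, String tableId) {
        return "Starting scan tserver=" + server + " tableId=" + tableId;
    }

    public static void main(String[] args) {
        Thread.currentThread().setName(scanThreadName("host1:9997", "t1"));
        if (log.isLoggable(Level.FINEST)) {
            // Expensive detail (ranges, iterator lists) is concatenated only
            // when the trace level is actually on.
            String msg = scanThreadName("host1:9997", "t1") + " range=... ssil=... ssio=...";
            log.finest(msg);
        }
        System.out.println(Thread.currentThread().getName());
    }
}
```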
[GitHub] keith-turner closed pull request #382: ACCUMULO-4782 switch session manager to a concurrent map
keith-turner closed pull request #382: ACCUMULO-4782 switch session manager to a concurrent map URL: https://github.com/apache/accumulo/pull/382 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/Session.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/Session.java index 1d2d88d4aa..eed45cf073 100644 --- a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/Session.java +++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/Session.java @@ -20,10 +20,15 @@ import org.apache.accumulo.server.rpc.TServerUtils; public class Session { + + enum State { +NEW, UNRESERVED, RESERVED, REMOVED + } + public final String client; long lastAccessTime; public long startTime; - boolean reserved; + State state = State.NEW; private final TCredentials credentials; Session(TCredentials credentials) { diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java index bf37855bb4..b04a367561 100644 --- a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java +++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java @@ -16,6 +16,7 @@ */ package org.apache.accumulo.tserver.session; +import java.security.NoSuchAlgorithmException; import java.security.SecureRandom; import java.util.ArrayList; import java.util.HashMap; @@ -26,6 +27,8 @@ import java.util.Map.Entry; import java.util.Set; import java.util.TimerTask; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; import org.apache.accumulo.core.client.impl.Translator; import
org.apache.accumulo.core.client.impl.Translators; @@ -39,18 +42,20 @@ import org.apache.accumulo.server.util.time.SimpleTimer; import org.apache.accumulo.tserver.scan.ScanRunState; import org.apache.accumulo.tserver.scan.ScanTask; +import org.apache.accumulo.tserver.session.Session.State; import org.apache.accumulo.tserver.tablet.ScanBatch; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import com.google.common.base.Preconditions; import com.google.common.collect.Iterables; import com.google.common.collect.Maps; public class SessionManager { private static final Logger log = LoggerFactory.getLogger(SessionManager.class); - private final SecureRandom random = new SecureRandom(); - private final Map<Long,Session> sessions = new HashMap<>(); + private final SecureRandom random; + private final ConcurrentMap<Long,Session> sessions = new ConcurrentHashMap<>(); private final long maxIdle; private final long maxUpdateIdle; private final List<Session> idleSessions = new ArrayList<>(); @@ -62,6 +67,16 @@ public SessionManager(AccumuloConfiguration conf) { maxUpdateIdle = conf.getTimeInMillis(Property.TSERV_UPDATE_SESSION_MAXIDLE); maxIdle = conf.getTimeInMillis(Property.TSERV_SESSION_MAXIDLE); +SecureRandom sr; +try { + // This is faster than the default secure random which uses /dev/urandom + sr = SecureRandom.getInstance("SHA1PRNG"); +} catch (NoSuchAlgorithmException e) { + log.debug("Unable to create SHA1PRNG secure random, using default"); + sr = new SecureRandom(); +} +random = sr; + Runnable r = new Runnable() { @Override public void run() { @@ -70,20 +85,21 @@ public void run() { }; SimpleTimer.getInstance(conf).schedule(r, 0, Math.max(maxIdle / 2, 1000)); + } - public synchronized long createSession(Session session, boolean reserve) { + public long createSession(Session session, boolean reserve) { long sid = random.nextLong(); -while (sessions.containsKey(sid)) { - sid = random.nextLong(); +synchronized (session) { + Preconditions.checkArgument(session.state == State.NEW); + session.state =
reserve ? State.RESERVED : State.UNRESERVED; + session.startTime = session.lastAccessTime = System.currentTimeMillis(); } -sessions.put(sid, session); - -session.reserved = reserve; - -session.startTime = session.lastAccessTime = System.currentTimeMillis(); +while (sessions.putIfAbsent(sid, session) != null) { + sid = random.nextLong(); +} return sid; } @@ -96,56 +112,83 @@ public long getMaxIdleTime() { * while a session is reserved, it cannot be canceled or removed */ - public synchronized Session reserveSession(long sessionId) { + public Session reserveSession(long sessionId) { Session session = sessions.get(sessionId); if (session != null) { - if (session.reserved) -
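The core of this change, allocating a session id without a manager-wide lock, can be sketched as follows (simplified, not Accumulo's `SessionManager`): `putIfAbsent` atomically claims the random id, and a non-null return means the id was already taken, so we retry with a fresh one.

```java
import java.security.SecureRandom;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SessionIds {
    private final SecureRandom random = new SecureRandom();
    private final ConcurrentMap<Long, Object> sessions = new ConcurrentHashMap<>();

    public long createSession(Object session) {
        long sid = random.nextLong();
        // putIfAbsent atomically claims the id; a non-null return means another
        // session already owns it, so pick a new random id and retry. This
        // replaces the old synchronized containsKey-then-put sequence.
        while (sessions.putIfAbsent(sid, session) != null) {
            sid = random.nextLong();
        }
        return sid;
    }

    public Object get(long sid) {
        return sessions.get(sid);
    }
}
```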
[GitHub] keith-turner closed pull request #387: ACCUMULO-4801 cache slow to compute fields of client context
keith-turner closed pull request #387: ACCUMULO-4801 cache slow to compute fields of client context URL: https://github.com/apache/accumulo/pull/387 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/core/src/main/java/org/apache/accumulo/core/client/impl/ClientContext.java b/core/src/main/java/org/apache/accumulo/core/client/impl/ClientContext.java index 9f68d16b00..f54a7a98e1 100644 --- a/core/src/main/java/org/apache/accumulo/core/client/impl/ClientContext.java +++ b/core/src/main/java/org/apache/accumulo/core/client/impl/ClientContext.java @@ -22,6 +22,7 @@ import java.io.IOException; import java.util.Iterator; import java.util.Map; +import java.util.concurrent.TimeUnit; import org.apache.accumulo.core.client.AccumuloException; import org.apache.accumulo.core.client.AccumuloSecurityException; @@ -42,6 +43,8 @@ import org.slf4j.LoggerFactory; import com.google.common.base.Predicate; +import com.google.common.base.Supplier; +import com.google.common.base.Suppliers; /** * This class represents any essential configuration and credentials needed to initiate RPC operations throughout the code. It is intended to represent a shared @@ -59,6 +62,12 @@ private final AccumuloConfiguration rpcConf; protected Connector conn; + // These fields are very frequently accessed (each time a connection is created) and expensive to compute, so cache them. 
+ private Supplier<Long> timeoutSupplier; + private Supplier<SaslConnectionParams> saslSupplier; + private Supplier<SslConnectionParams> sslSupplier; + private TCredentials rpcCreds; + /** * Instantiate a client context */ @@ -75,6 +84,43 @@ public ClientContext(Instance instance, Credentials credentials, AccumuloConfigu creds = requireNonNull(credentials, "credentials is null"); rpcConf = requireNonNull(serverConf, "serverConf is null"); clientConf = null; + +timeoutSupplier = new Supplier<Long>() { + @Override + public Long get() { +return getConfiguration().getTimeInMillis(Property.GENERAL_RPC_TIMEOUT); + } +}; + +sslSupplier = new Supplier<SslConnectionParams>() { + @Override + public SslConnectionParams get() { +return SslConnectionParams.forClient(getConfiguration()); + } +}; + +saslSupplier = new Supplier<SaslConnectionParams>() { + @Override + public SaslConnectionParams get() { +// Use the clientConf if we have it +if (null != clientConf) { + if (!clientConf.hasSasl()) { +return null; + } + return new SaslConnectionParams(clientConf, getCredentials().getToken()); +} +AccumuloConfiguration conf = getConfiguration(); +if (!conf.getBoolean(Property.INSTANCE_RPC_SASL_ENABLED)) { + return null; +} +return new SaslConnectionParams(conf, getCredentials().getToken()); + } +}; + +timeoutSupplier = Suppliers.memoizeWithExpiration(timeoutSupplier, 100, TimeUnit.MILLISECONDS); +sslSupplier = Suppliers.memoizeWithExpiration(sslSupplier, 100, TimeUnit.MILLISECONDS); +saslSupplier = Suppliers.memoizeWithExpiration(saslSupplier, 100, TimeUnit.MILLISECONDS); + } /** @@ -97,6 +143,7 @@ public synchronized Credentials getCredentials() { public synchronized void setCredentials(Credentials newCredentials) { checkArgument(newCredentials != null, "newCredentials is null"); creds = newCredentials; +rpcCreds = null; } /** @@ -110,32 +157,21 @@ public AccumuloConfiguration getConfiguration() { * Retrieve the universal RPC client timeout from the configuration */ public long getClientTimeoutInMillis() { -return getConfiguration().getTimeInMillis(Property.GENERAL_RPC_TIMEOUT);
+return timeoutSupplier.get(); } /** * Retrieve SSL/TLS configuration to initiate an RPC connection to a server */ public SslConnectionParams getClientSslParams() { -return SslConnectionParams.forClient(getConfiguration()); +return sslSupplier.get(); } /** * Retrieve SASL configuration to initiate an RPC connection to a server */ public SaslConnectionParams getSaslParams() { -// Use the clientConf if we have it -if (null != clientConf) { - if (!clientConf.hasSasl()) { -return null; - } - return new SaslConnectionParams(clientConf, getCredentials().getToken()); -} -AccumuloConfiguration conf = getConfiguration(); -if (!conf.getBoolean(Property.INSTANCE_RPC_SASL_ENABLED)) { - return null; -} -return new SaslConnectionParams(conf, getCredentials().getToken()); +return saslSupplier.get(); } /** @@ -158,8 +194,16 @@ public Connector getConnector() throws AccumuloException, AccumuloSecurityExcept /** * Serialize the credentials just before initiating
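The caching pattern merged in this PR — wrap each expensive lookup in a supplier that is memoized with a short expiration — can be sketched without the Guava dependency. The class below is an illustrative stand-in for Guava's Suppliers.memoizeWithExpiration, not Accumulo's actual code; all names here are hypothetical.

```java
import java.util.function.Supplier;

// Minimal sketch (assumed names): recompute an expensive value at most once
// per expiry window, returning the cached value in between. The real change
// uses Guava's Suppliers.memoizeWithExpiration with a 100 ms window.
class ExpiringSupplier<T> implements Supplier<T> {
    private final Supplier<T> delegate;
    private final long expiryNanos;
    private T value;
    private long deadline; // 0 means "not computed yet"

    ExpiringSupplier(Supplier<T> delegate, long expiryMillis) {
        this.delegate = delegate;
        this.expiryNanos = expiryMillis * 1_000_000L;
    }

    @Override
    public synchronized T get() {
        long now = System.nanoTime();
        if (deadline == 0 || now - deadline >= 0) {
            value = delegate.get();       // recompute the expensive value
            deadline = now + expiryNanos; // cached for the next window
        }
        return value;
    }
}
```

A short window (100 ms in the diff) keeps hot call sites cheap while still picking up configuration changes quickly.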
Accumulo-Pull-Requests - Build # 1048 - Still Failing
The Apache Jenkins build system has built Accumulo-Pull-Requests (build #1048) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-Pull-Requests/1048/ to view the results.
[jira] [Commented] (ACCUMULO-4805) Seeing thread contention on FileManager
[ https://issues.apache.org/jira/browse/ACCUMULO-4805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364789#comment-16364789 ] Christopher Tubbs commented on ACCUMULO-4805: - Should this be closed because the PR is merged, or is there more work to be done? > Seeing thread contention on FileManager > --- > > Key: ACCUMULO-4805 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4805 > Project: Accumulo > Issue Type: Bug >Reporter: Keith Turner >Assignee: Keith Turner >Priority: Major > Labels: pull-request-available > Fix For: 1.9.0, 2.0.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Accumulo has a tablet server wide cache of open files. Accessing this cache > obtains a global lock. In profiling, I am seeing contention on this lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (ACCUMULO-4804) Bulk Ingest example is broken
[ https://issues.apache.org/jira/browse/ACCUMULO-4804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Tubbs resolved ACCUMULO-4804. - Resolution: Fixed Fix Version/s: (was: 2.0.0) > Bulk Ingest example is broken > - > > Key: ACCUMULO-4804 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4804 > Project: Accumulo > Issue Type: Bug > Components: examples >Affects Versions: 2.0.0 >Reporter: Michael Miller >Assignee: Michael Miller >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > The bulk import > ([https://github.com/milleruntime/accumulo-examples/blob/master/docs/bulkIngest.md)] > example in the new 2.0 repo does not work on its own. When trying to run > the commands, you get the error: > The following option is required: [-c | --conf] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] keith-turner closed pull request #383: ACCUMULO-4809 Avoid blocking during session cleanup
keith-turner closed pull request #383: ACCUMULO-4809 Avoid blocking during session cleanup URL: https://github.com/apache/accumulo/pull/383 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java index 78c351ded7..a03c0321b8 100644 --- a/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java +++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java @@ -169,36 +169,38 @@ public Session removeSession(long sessionId, boolean unreserve) { } private void sweep(final long maxIdle, final long maxUpdateIdle) { -List sessionsToCleanup = new ArrayList<>(); -synchronized (this) { - Iterator iter = sessions.values().iterator(); - while (iter.hasNext()) { -Session session = iter.next(); -long configuredIdle = maxIdle; -if (session instanceof UpdateSession) { - configuredIdle = maxUpdateIdle; -} -long idleTime = System.currentTimeMillis() - session.lastAccessTime; -if (idleTime > configuredIdle && !session.reserved) { - log.info("Closing idle session from user=" + session.getUser() + ", client=" + session.client + ", idle=" + idleTime + "ms"); - iter.remove(); - sessionsToCleanup.add(session); +// In Accumulo's current code only one thread will ever call this method. However if multiple threads called this method concurrently it could result in +// sessions being lost. This method synchronizes on idleSessions to prevent the loss. Its not expected that anything else will synchronize on idleSessions. 
+synchronized (idleSessions) { + synchronized (this) { +Iterator<Session> iter = sessions.values().iterator(); +while (iter.hasNext()) { + Session session = iter.next(); + long configuredIdle = maxIdle; + if (session instanceof UpdateSession) { +configuredIdle = maxUpdateIdle; + } + long idleTime = System.currentTimeMillis() - session.lastAccessTime; + if (idleTime > configuredIdle && !session.reserved) { +log.info("Closing idle session from user=" + session.getUser() + ", client=" + session.client + ", idle=" + idleTime + "ms"); +iter.remove(); +idleSessions.add(session); + } } } -} - -// do clean up outside of lock for TabletServer in a synchronized block for simplicity vice a synchronized list -synchronized (idleSessions) { + List<Session> sessionsToCleanup = new ArrayList<>(); - sessionsToCleanup.addAll(idleSessions); - - idleSessions.clear(); + // do clean up outside of lock for TabletServer + for (Session session : idleSessions) { +if (!session.cleanup()) { + sessionsToCleanup.add(session); +} + } - // perform cleanup for all of the sessions - for (Session session : sessionsToCleanup) { -if (!session.cleanup()) - idleSessions.add(session); + synchronized (this) { +idleSessions.clear(); +idleSessions.addAll(sessionsToCleanup); } } } @@ -234,16 +236,14 @@ public void run() { Map counts = new HashMap<>(); Set<Entry<Long,Session>> copiedIdleSessions = new HashSet<>(); -synchronized (idleSessions) { - /** - * Add sessions so that get the list returned in the active scans call - */ - for (Session session : idleSessions) { -copiedIdleSessions.add(Maps.immutableEntry(expiredSessionMarker, session)); - } +/** + * Add sessions so that get the list returned in the active scans call + */ +for (Session session : idleSessions) { + copiedIdleSessions.add(Maps.immutableEntry(expiredSessionMarker, session)); } -for (Entry<Long,Session> entry : sessions.entrySet()) { +for (Entry<Long,Session> entry : Iterables.concat(sessions.entrySet(), copiedIdleSessions)) { Session session = entry.getValue(); @SuppressWarnings("rawtypes") @@ -286,13
+286,11 @@ public void run() { final long ct = System.currentTimeMillis(); final Set<Entry<Long,Session>> copiedIdleSessions = new HashSet<>(); -synchronized (idleSessions) { - /** - * Add sessions so that get the list returned in the active scans call - */ - for (Session session : idleSessions) { -copiedIdleSessions.add(Maps.immutableEntry(expiredSessionMarker, session)); - } +/** + * Add sessions so that get the list returned in the active scans call + */ +for
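The restructured sweep above follows a common pattern: collect expired entries under the lock, run the potentially blocking cleanup outside it, then re-queue whatever did not finish. A minimal stand-alone sketch of that pattern follows; the names and lock objects are illustrative, not the SessionManager API (the real code synchronizes on this and on idleSessions).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch (assumed names): short critical section to collect idle entries,
// slow cleanup outside the sessions lock, failed cleanups retried next sweep.
class IdleSweeper<S> {
    private final List<S> idle = new ArrayList<>();

    void sweep(Map<Long, S> sessions, Predicate<S> isIdle, Predicate<S> cleanup) {
        synchronized (idle) {            // serialize concurrent sweeps
            synchronized (sessions) {    // keep this critical section short
                sessions.values().removeIf(s -> {
                    if (isIdle.test(s)) { idle.add(s); return true; }
                    return false;
                });
            }
            List<S> retry = new ArrayList<>();
            for (S s : idle) {           // blocking work happens outside the sessions lock
                if (!cleanup.test(s))
                    retry.add(s);        // cleanup incomplete, try again next sweep
            }
            synchronized (sessions) {
                idle.clear();
                idle.addAll(retry);
            }
        }
    }

    int pending() { synchronized (idle) { return idle.size(); } }
}
```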
[GitHub] keith-turner commented on a change in pull request #386: ACCUMULO-4799 removed redundant auth check
keith-turner commented on a change in pull request #386: ACCUMULO-4799 removed redundant auth check URL: https://github.com/apache/accumulo/pull/386#discussion_r168310848 ## File path: server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java ## @@ -270,7 +270,7 @@ public Authorizations getUserAuthorizations(TCredentials credentials) throws Thr } public boolean userHasAuthorizations(TCredentials credentials, List list) throws ThriftSecurityException { -authenticate(credentials); +// Authentication check not done here because this method is always called in conjunction with canScan that does auth check. Review comment: I renamed the method and added javadoc. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] keith-turner commented on issue #384: ACCUMULO-4789 Avoid creating expensive thread name
keith-turner commented on issue #384: ACCUMULO-4789 Avoid creating expensive thread name URL: https://github.com/apache/accumulo/pull/384#issuecomment-365745079 > I know there was some uncertainty with changing this debug information in 1.8... was there something that brought you to a decision? I am just wondering what your final thoughts were... It was the magnitude of the performance impact. For lots of really small lookups I was seeing the code spend 10% to 15% of its time creating this thread name. With this change, it hardly shows up. So my thinking was the thread names still has some useful info, but its much faster to create now. This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
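The cost being weighed above is per-lookup construction of a detailed thread name. A minimal sketch of the trade-off, with entirely hypothetical identifiers (not Accumulo's code): the full name concatenates several values on every call, while a short name keeps some context at near-zero cost and can be reused.

```java
// Illustrative only: contrasts an expensive per-lookup thread name with a
// cheap short one. Both helpers and their fields are hypothetical.
class ThreadNames {
    // Expensive: several concatenations rebuilt for every small lookup.
    static String detailedName(String client, String table, long opId) {
        return "scan client=" + client + " table=" + table + " op=" + opId;
    }

    // Cheap: minimal but still useful context, suitable for reuse.
    static String shortName(String table) {
        return "scan " + table;
    }
}
```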
[GitHub] keith-turner opened a new pull request #387: ACCUMULO-4801 cache slow to compute fields of client context
keith-turner opened a new pull request #387: ACCUMULO-4801 cache slow to compute fields of client context URL: https://github.com/apache/accumulo/pull/387 This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (ACCUMULO-4801) Consider precomputing some client context fields
[ https://issues.apache.org/jira/browse/ACCUMULO-4801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated ACCUMULO-4801: - Labels: pull-request-available (was: ) > Consider precomputing some client context fields > > > Key: ACCUMULO-4801 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4801 > Project: Accumulo > Issue Type: Improvement >Reporter: Keith Turner >Priority: Major > Labels: pull-request-available > Fix For: 2.0.0 > > > Currently each time a connection is requested from the the thrift transport > pool, three methods are called on client context to get ssl, sasl, and > timeout. These in turn call methods on configuration. This is showing up in > profiling as slow. I wonder if these could be precomputed in the client > context constructor. > > Also, repeatedly calling rpcCreds() on client context is showing up as slow. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] ctubbsii commented on a change in pull request #386: ACCUMULO-4799 removed redundant auth check
ctubbsii commented on a change in pull request #386: ACCUMULO-4799 removed redundant auth check URL: https://github.com/apache/accumulo/pull/386#discussion_r168306511 ## File path: server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java ## @@ -270,7 +270,7 @@ public Authorizations getUserAuthorizations(TCredentials credentials) throws Thr } public boolean userHasAuthorizations(TCredentials credentials, List list) throws ThriftSecurityException { -authenticate(credentials); +// Authentication check not done here because this method is always called in conjunction with canScan that does auth check. Review comment: I think this is a bit risky... somebody could add some code internally which uses this method without an authentication check (they might overlook the comment). Maybe a method rename, like `alreadyAuthenticatedUserHasAuthorizations` might make it more clear. It's a bit wordy, but my concern is that this method will be misused. Maybe there's another, simpler refactoring which would also avoid risky future misuse? This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] milleruntime closed pull request #8: ACCUMULO-4804 Fixes to work with 2.0
milleruntime closed pull request #8: ACCUMULO-4804 Fixes to work with 2.0 URL: https://github.com/apache/accumulo-examples/pull/8 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/docs/bulkIngest.md b/docs/bulkIngest.md index 22bf07c..b856d83 100644 --- a/docs/bulkIngest.md +++ b/docs/bulkIngest.md @@ -24,10 +24,10 @@ test data are created in HDFS. After that the 1000 rows are ingested into accumulo. Then we verify the 1000 rows are in accumulo. $ PKG=org.apache.accumulo.examples.mapreduce.bulk -$ ARGS="-i instance -z zookeepers -u username -p password" +$ ARGS="-c examples.conf" $ accumulo $PKG.SetupTable $ARGS -t test_bulk row_0333 row_0666 $ accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt -$ accumulo-util hadoop-jar target/accumulo-examples.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork +$ accumulo-util hadoop-jar target/accumulo-examples-X.Y.Z.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork $ accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000 For a high level discussion of bulk ingest, see the docs dir. 
diff --git a/pom.xml b/pom.xml index 834c761..3d426f3 100644 --- a/pom.xml +++ b/pom.xml @@ -33,7 +33,7 @@ Example code and corresponding documentation for using Apache Accumulo -1.8.1 +2.0.0-SNAPSHOT 2.6.4 1.7.21 1.8 diff --git a/src/main/java/org/apache/accumulo/examples/cli/ClientOpts.java b/src/main/java/org/apache/accumulo/examples/cli/ClientOpts.java index e08dfb8..4df1eae 100644 --- a/src/main/java/org/apache/accumulo/examples/cli/ClientOpts.java +++ b/src/main/java/org/apache/accumulo/examples/cli/ClientOpts.java @@ -16,6 +16,7 @@ */ package org.apache.accumulo.examples.cli; +import java.io.File; import java.time.Duration; import org.apache.accumulo.core.client.AccumuloException; @@ -85,12 +86,12 @@ public Long convert(String str) { } } - public static class PropertiesConverter implements IStringConverter { + public static class PropertiesConverter implements IStringConverter { @Override -public Configuration convert(String filename) { +public File convert(String filename) { try { -return new PropertiesConfiguration(filename); - } catch (ConfigurationException e) { +return new File(filename); + } catch (Exception e) { throw new RuntimeException(e); } } @@ -98,14 +99,14 @@ public Configuration convert(String filename) { @Parameter(names = {"-c", "--conf"}, required = true, converter = PropertiesConverter.class, description = "Config file for connecting to Accumulo. 
See README.md for details.") - private Configuration config = null; + private File config = null; @Parameter(names = {"-auths", "--auths"}, converter = AuthConverter.class, description = "the authorizations to use when reading or writing") public Authorizations auths = Authorizations.EMPTY; public Connector getConnector() { try { - ZooKeeperInstance zki = new ZooKeeperInstance(config); + ZooKeeperInstance zki = new ZooKeeperInstance(getClientConfiguration()); return zki.getConnector(getPrincipal(), getToken()); } catch (AccumuloException | AccumuloSecurityException e) { throw new RuntimeException(e); @@ -113,14 +114,24 @@ public Connector getConnector() { } public ClientConfiguration getClientConfiguration() { -return new ClientConfiguration(config); +return ClientConfiguration.fromFile(config); } public String getPrincipal() { -return config.getString("accumulo.examples.principal", "root"); +String user = getClientConfiguration().getString("accumulo.examples.principal"); +if(user != null) + return user; + +return "root"; } public AuthenticationToken getToken() { -return new PasswordToken(config.getString("accumulo.examples.password", "secret")); +AuthenticationToken token = new PasswordToken("secret"); +String password = getClientConfiguration().getString("accumulo.examples.password"); +if(password != null){ + token = new PasswordToken(password); +} + +return token; } } diff --git a/src/main/java/org/apache/accumulo/examples/reservations/ARS.java b/src/main/java/org/apache/accumulo/examples/reservations/ARS.java index fb0277c..47dfe97 100644 --- a/src/main/java/org/apache/accumulo/examples/reservations/ARS.java +++ b/src/main/java/org/apache/accumulo/examples/reservations/ARS.java @@ -281,7 +281,7 @@ public void run() { } else if (tokens[0].equals("quit") && tokens.length == 1) { break; } else if
Accumulo-Pull-Requests - Build # 1046 - Still Failing
The Apache Jenkins build system has built Accumulo-Pull-Requests (build #1046) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-Pull-Requests/1046/ to view the results.
[jira] [Comment Edited] (ACCUMULO-4787) Numerous leaked CLOSE_WAIT threads from TabletServer
[ https://issues.apache.org/jira/browse/ACCUMULO-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364744#comment-16364744 ] Adam J Shook edited comment on ACCUMULO-4787 at 2/14/18 8:38 PM: - Applied to {{1.8}}, cherry-picked into {{1.7}}, and merged into {{master}}. was (Author: adamjshook): Applied to {{1.8}, cherry-picked into {{1.7}}, and merged into {{master}}. > Numerous leaked CLOSE_WAIT threads from TabletServer > > > Key: ACCUMULO-4787 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4787 > Project: Accumulo > Issue Type: Bug >Affects Versions: 1.8.1 > Environment: * Ubuntu 14.04 > * HDFS 2.6.0 and 2.5.0 (in the middle of an upgrade cycle) > * ZooKeeper 3.4.6 > * Accumulo 1.8.1 > * HotSpot 1.8.0_121 >Reporter: Adam J Shook >Assignee: Adam J Shook >Priority: Minor > Labels: pull-request-available > Fix For: 1.7.4, 1.9.0, 2.0.0 > > Time Spent: 5.5h > Remaining Estimate: 0h > > I'm running into an issue across all environments where TabletServers are > occupying a large number of ports in a CLOSED_WAIT state writing to a > DataNode at port 50010. I'm seeing numbers from around 12,000 to 20,000 > ports. In some instances, there were over 68k and it was affecting other > applications from getting a free port and they would fail to start (which is > how we found this in the first place). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (ACCUMULO-4787) Numerous leaked CLOSE_WAIT threads from TabletServer
[ https://issues.apache.org/jira/browse/ACCUMULO-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364744#comment-16364744 ] Adam J Shook commented on ACCUMULO-4787: Applied to {{1.8}, cherry-picked into {{1.7}}, and merged into {{master}}. > Numerous leaked CLOSE_WAIT threads from TabletServer > > > Key: ACCUMULO-4787 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4787 > Project: Accumulo > Issue Type: Bug >Affects Versions: 1.8.1 > Environment: * Ubuntu 14.04 > * HDFS 2.6.0 and 2.5.0 (in the middle of an upgrade cycle) > * ZooKeeper 3.4.6 > * Accumulo 1.8.1 > * HotSpot 1.8.0_121 >Reporter: Adam J Shook >Assignee: Adam J Shook >Priority: Minor > Labels: pull-request-available > Fix For: 1.7.4, 1.9.0, 2.0.0 > > Time Spent: 5h 20m > Remaining Estimate: 0h > > I'm running into an issue across all environments where TabletServers are > occupying a large number of ports in a CLOSED_WAIT state writing to a > DataNode at port 50010. I'm seeing numbers from around 12,000 to 20,000 > ports. In some instances, there were over 68k and it was affecting other > applications from getting a free port and they would fail to start (which is > how we found this in the first place). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Accumulo-Master - Build # 2259 - Still Failing
The Apache Jenkins build system has built Accumulo-Master (build #2259) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-Master/2259/ to view the results.
[GitHub] keith-turner opened a new pull request #386: ACCUMULO-4799 removed redundant auth check
keith-turner opened a new pull request #386: ACCUMULO-4799 removed redundant auth check URL: https://github.com/apache/accumulo/pull/386 This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (ACCUMULO-4799) In tablet server start scan authenticates twice
[ https://issues.apache.org/jira/browse/ACCUMULO-4799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated ACCUMULO-4799: - Labels: pull-request-available (was: ) > In tablet server start scan authenticates twice > --- > > Key: ACCUMULO-4799 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4799 > Project: Accumulo > Issue Type: Improvement >Affects Versions: 1.7.3, 1.8.1 >Reporter: Keith Turner >Priority: Major > Labels: pull-request-available > Fix For: 1.9.0, 2.0.0 > > > The code that handles a start scan RPC call checks authentication twice. > Each call to authenticate takes a bit of time. It would be nice if it only > did it once. > At [TabletServer line > 479|https://github.com/apache/accumulo/blob/rel/1.8.1/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java#L479] > a call to canScan is made which calls authenticate. Then at [TabletServer > line > 482|https://github.com/apache/accumulo/blob/rel/1.8.1/server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java#L482] > a call to check authorizations is made which also authenticates. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (ACCUMULO-4787) Numerous leaked CLOSE_WAIT threads from TabletServer
[ https://issues.apache.org/jira/browse/ACCUMULO-4787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Tubbs resolved ACCUMULO-4787. - Resolution: Fixed > Numerous leaked CLOSE_WAIT threads from TabletServer > > > Key: ACCUMULO-4787 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4787 > Project: Accumulo > Issue Type: Bug >Affects Versions: 1.8.1 > Environment: * Ubuntu 14.04 > * HDFS 2.6.0 and 2.5.0 (in the middle of an upgrade cycle) > * ZooKeeper 3.4.6 > * Accumulo 1.8.1 > * HotSpot 1.8.0_121 >Reporter: Adam J Shook >Assignee: Adam J Shook >Priority: Minor > Labels: pull-request-available > Fix For: 1.7.4, 1.9.0, 2.0.0 > > Time Spent: 5h 10m > Remaining Estimate: 0h > > I'm running into an issue across all environments where TabletServers are > occupying a large number of ports in a CLOSED_WAIT state writing to a > DataNode at port 50010. I'm seeing numbers from around 12,000 to 20,000 > ports. In some instances, there were over 68k and it was affecting other > applications from getting a free port and they would fail to start (which is > how we found this in the first place). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] ctubbsii commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem
ctubbsii commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem URL: https://github.com/apache/accumulo/pull/369#issuecomment-365735451 Created follow-on issue for the potentially unclosed crypto stream from `DfsLogger.readHeaderAndReturnStream` https://issues.apache.org/jira/browse/ACCUMULO-4816 This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (ACCUMULO-4816) DfsLogger.readHeaderAndReturnStream should return a Closeable
Christopher Tubbs created ACCUMULO-4816: --- Summary: DfsLogger.readHeaderAndReturnStream should return a Closeable Key: ACCUMULO-4816 URL: https://issues.apache.org/jira/browse/ACCUMULO-4816 Project: Accumulo Issue Type: Improvement Affects Versions: 2.0.0 Reporter: Christopher Tubbs With the change in https://github.com/apache/accumulo/pull/369, it reveals that there's a possible unclosed resource hidden in the object returned from DfsLogger.readHeaderAndReturnStream. This should return something which is a Closeable, so it can be used in a try-with-resources block and cleaned up properly just in case the extra stream allocates additional resources which need to be closed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
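The shape being requested in ACCUMULO-4816 can be sketched as a small wrapper: make the returned pair of streams Closeable so callers can use try-with-resources and any wrapped decrypting stream is released even on error paths. This is an illustrative sketch, not the actual DFSLoggerInputStreams class; the LogStreams name is made up.

```java
import java.io.Closeable;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch (assumed names): holds the raw and the (possibly decrypting) stream
// returned by a header-reading helper, and closes both deterministically.
class LogStreams implements Closeable {
    private final InputStream original;
    private final DataInputStream decrypting;

    LogStreams(InputStream original, DataInputStream decrypting) {
        this.original = original;
        this.decrypting = decrypting;
    }

    DataInputStream getDecryptingStream() {
        return decrypting;
    }

    @Override
    public void close() throws IOException {
        // Close the outermost stream first; closing the raw stream afterwards
        // is a harmless no-op if the wrapper already closed it.
        decrypting.close();
        original.close();
    }
}
```

With this in place a caller can write try (LogStreams s = readHeaderAndReturnStream(...)) { ... } and cleanup happens automatically.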
[GitHub] keith-turner closed pull request #371: ACCUMULO-4792 Improve crypto configuration checks
keith-turner closed pull request #371: ACCUMULO-4792 Improve crypto configuration checks URL: https://github.com/apache/accumulo/pull/371 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java b/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java index f787d5e7f2..33453be436 100644 --- a/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java +++ b/core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java @@ -31,6 +31,9 @@ private static final Logger log = LoggerFactory.getLogger(ConfigSanityCheck.class); private static final String PREFIX = "BAD CONFIG "; + private static final String NULL_CIPHER = "NullCipher"; + private static final String NULL_CRYPTO_MODULE = "NullCryptoModule"; + private static final String NULL_SECRET_KEY_ENCRYPTION_STRATEGY = "NullSecretKeyEncryptionStrategy"; @SuppressWarnings("deprecation") private static final Property INSTANCE_DFS_URI = Property.INSTANCE_DFS_URI; @SuppressWarnings("deprecation") @@ -49,8 +52,10 @@ public static void validate(Iterable> entries) { String instanceZkTimeoutValue = null; boolean usingVolumes = false; -String cipherSuite = null; -String keyAlgorithm = null; +String cipherSuite = NULL_CIPHER; +String keyAlgorithm = NULL_CIPHER; +String secretKeyEncryptionStrategy = NULL_SECRET_KEY_ENCRYPTION_STRATEGY; +String cryptoModule = NULL_CRYPTO_MODULE; for (Entry entry : entries) { String key = entry.getKey(); String value = entry.getValue(); @@ -81,13 +86,20 @@ else if (!prop.getType().isValidFormat(value)) if (key.equals(Property.CRYPTO_CIPHER_SUITE.getKey())) { cipherSuite = Objects.requireNonNull(value); -Preconditions.checkArgument(cipherSuite.equals("NullCipher") || 
cipherSuite.split("/").length == 3, +Preconditions.checkArgument(cipherSuite.equals(NULL_CIPHER) || cipherSuite.split("/").length == 3, "Cipher suite must be NullCipher or in the form algorithm/mode/padding. Suite: " + cipherSuite + " is invalid."); } if (key.equals(Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey())) { keyAlgorithm = Objects.requireNonNull(value); } + + if (key.equals(Property.CRYPTO_MODULE_CLASS.getKey())) { +cryptoModule = Objects.requireNonNull(value); + } + if (key.equals(Property.CRYPTO_SECRET_KEY_ENCRYPTION_STRATEGY_CLASS.getKey())) { +secretKeyEncryptionStrategy = Objects.requireNonNull(value); + } } if (instanceZkTimeoutValue != null) { @@ -98,12 +110,12 @@ else if (!prop.getType().isValidFormat(value)) log.warn("Use of {} and {} are deprecated. Consider using {} instead.", INSTANCE_DFS_URI, INSTANCE_DFS_DIR, Property.INSTANCE_VOLUMES); } -if (cipherSuite.equals("NullCipher") && !keyAlgorithm.equals("NullCipher")) { - fatal(Property.CRYPTO_CIPHER_SUITE.getKey() + " should be configured when " + Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey() + " is set."); +if ((cipherSuite.equals(NULL_CIPHER) || keyAlgorithm.equals(NULL_CIPHER)) && !cipherSuite.equals(keyAlgorithm)) { + fatal(Property.CRYPTO_CIPHER_SUITE.getKey() + " and " + Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME + " must both be configured."); } -if (!cipherSuite.equals("NullCipher") && keyAlgorithm.equals("NullCipher")) { - fatal(Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey() + " should be configured when " + Property.CRYPTO_CIPHER_SUITE.getKey() + " is set."); +if (cryptoModule.equals(NULL_CRYPTO_MODULE) ^ secretKeyEncryptionStrategy.equals(NULL_SECRET_KEY_ENCRYPTION_STRATEGY)) { + fatal(Property.CRYPTO_MODULE_CLASS.getKey() + " and " + Property.CRYPTO_SECRET_KEY_ENCRYPTION_STRATEGY_CLASS.getKey() + " must both be configured."); } } diff --git a/core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java 
b/core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java index 9f2ff8efce..f359b4ef74 100644 --- a/core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java +++ b/core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java @@ -28,8 +28,6 @@ @Before public void setUp() { m = new java.util.HashMap<>(); -m.put(Property.CRYPTO_CIPHER_SUITE.getKey(), "NullCipher"); -m.put(Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey(), "NullCipher"); } @Test @@ -93,4 +91,32 @@ public void testFail_cipherSuiteNotSetKeyAlgorithmSet() { m.put(Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey(), "AES"); ConfigSanityCheck.validate(m.entrySet());
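The validation rule introduced by this diff — the crypto module class and the secret-key encryption strategy must be configured together or not at all — reduces to an XOR on the two null-object defaults. A minimal sketch follows; the constant values are copied from the diff, but the class and method names are illustrative, not ConfigSanityCheck's API.

```java
import java.util.Objects;

// Sketch (assumed names): "both or neither" check for a pair of related
// crypto properties, mirroring the XOR test added in the diff above.
class CryptoConfigCheck {
    static final String NULL_CRYPTO_MODULE = "NullCryptoModule";
    static final String NULL_KEY_STRATEGY = "NullSecretKeyEncryptionStrategy";

    /** @return true when the pair is consistent (both defaults or both set). */
    static boolean consistent(String cryptoModule, String keyStrategy) {
        boolean moduleIsNull = Objects.requireNonNull(cryptoModule).equals(NULL_CRYPTO_MODULE);
        boolean strategyIsNull = Objects.requireNonNull(keyStrategy).equals(NULL_KEY_STRATEGY);
        // XOR: exactly one configured is the invalid, half-configured state.
        return !(moduleIsNull ^ strategyIsNull);
    }
}
```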
[jira] [Updated] (ACCUMULO-4709) Add size sanity checks to Mutations
[ https://issues.apache.org/jira/browse/ACCUMULO-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Tubbs updated ACCUMULO-4709: Fix Version/s: 2.0.0 > Add size sanity checks to Mutations > --- > > Key: ACCUMULO-4709 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4709 > Project: Accumulo > Issue Type: Improvement >Reporter: Keith Turner >Assignee: Gergely Hajós >Priority: Major > Labels: newbie, pull-request-available > Fix For: 2.0.0 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > Based in ACCUMULO-4708, it may be good to add size sanity checks to > Accumulo's Mutation data type. The first step would be to determine how > Mutation handles the following situations currently. > * Create a mutation and put lots of small entries where total size exceeds > 2GB > * Create a mutation and add a single entry where the total of all fields > exceeds 2GB, but no individual field exceeds 2GB -- This message was sent by Atlassian JIRA (v7.6.3#76005)
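The concern behind ACCUMULO-4709 is int overflow once a mutation's total serialized size passes 2 GB. One possible guard — accumulate the running size in a long and fail fast at a bound — is sketched below. This is purely illustrative and not Accumulo's actual Mutation internals; the class, method, and limit are assumptions.

```java
// Sketch (assumed names): track a mutation's running size in a long so the
// sum cannot silently wrap past Integer.MAX_VALUE, and reject additions
// that would exceed an illustrative 2 GB serialization bound.
class MutationSizeCheck {
    static final long MAX_MUTATION_SIZE = Integer.MAX_VALUE; // assumed bound

    static long addSize(long currentSize, int entryBytes) {
        long next = currentSize + entryBytes; // long arithmetic: no int overflow
        if (next > MAX_MUTATION_SIZE)
            throw new IllegalArgumentException(
                "mutation would exceed " + MAX_MUTATION_SIZE + " bytes");
        return next;
    }
}
```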
Accumulo-Pull-Requests - Build # 1045 - Still Failing
The Apache Jenkins build system has built Accumulo-Pull-Requests (build #1045) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-Pull-Requests/1045/ to view the results.
Accumulo-Pull-Requests - Build # 1044 - Failure
The Apache Jenkins build system has built Accumulo-Pull-Requests (build #1044) Status: Failure Check console output at https://builds.apache.org/job/Accumulo-Pull-Requests/1044/ to view the results.
Accumulo-Master - Build # 2258 - Still Failing
The Apache Jenkins build system has built Accumulo-Master (build #2258) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-Master/2258/ to view the results.
Accumulo-Pull-Requests - Build # 1043 - Still unstable
The Apache Jenkins build system has built Accumulo-Pull-Requests (build #1043) Status: Still unstable Check console output at https://builds.apache.org/job/Accumulo-Pull-Requests/1043/ to view the results.
[GitHub] keith-turner opened a new pull request #385: Accumulo 4788
keith-turner opened a new pull request #385: Accumulo 4788 URL: https://github.com/apache/accumulo/pull/385 This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] keith-turner commented on a change in pull request #382: ACCUMULO-4782 switch session manager to a concurrent map
keith-turner commented on a change in pull request #382: ACCUMULO-4782 switch session manager to a concurrent map URL: https://github.com/apache/accumulo/pull/382#discussion_r168284682 ## File path: server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java ## @@ -70,20 +84,20 @@ public void run() { }; SimpleTimer.getInstance(conf).schedule(r, 0, Math.max(maxIdle / 2, 1000)); + } - public synchronized long createSession(Session session, boolean reserve) { + public long createSession(Session session, boolean reserve) { Review comment: I added a NEW state and checked it in this method
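The change being discussed — dropping `synchronized` from `createSession` and guarding against reuse with an initial session state — can be sketched roughly as follows. Class, field, and state names here are illustrative, not the actual SessionManager code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch in the spirit of the PR: a ConcurrentHashMap replaces
// method-level synchronization, and each Session starts in a NEW state that
// createSession() verifies before publishing the session to other threads.
class SessionSketch {
  enum State { NEW, RESERVED, UNRESERVED }

  static class Session {
    volatile State state = State.NEW;
  }

  final ConcurrentHashMap<Long, Session> sessions = new ConcurrentHashMap<>();
  private final AtomicLong nextId = new AtomicLong();

  long createSession(Session session, boolean reserve) {
    // A session may only be registered once; the NEW state enforces that.
    if (session.state != State.NEW)
      throw new IllegalStateException("session already registered");
    session.state = reserve ? State.RESERVED : State.UNRESERVED;
    long id = nextId.incrementAndGet();
    sessions.put(id, session); // thread-safe without a synchronized method
    return id;
  }

  Session get(long id) {
    return sessions.get(id);
  }
}
```

The state check makes the lock removal safe to reason about: a session is fully initialized exactly once, before it becomes reachable through the map.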
Accumulo-1.8 - Build # 260 - Still Failing
The Apache Jenkins build system has built Accumulo-1.8 (build #260) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-1.8/260/ to view the results.
[GitHub] adamjshook closed pull request #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem
adamjshook closed pull request #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem URL: https://github.com/apache/accumulo/pull/369 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java index 395e18fba2..f2d36fd44c 100644 --- a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java +++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/DfsLogger.java @@ -337,8 +337,7 @@ public DfsLogger(ServerResources conf, String filename, String meta) throws IOEx metaReference = meta; } - public static DFSLoggerInputStreams readHeaderAndReturnStream(VolumeManager fs, Path path, AccumuloConfiguration conf) throws IOException { -FSDataInputStream input = fs.open(path); + public static DFSLoggerInputStreams readHeaderAndReturnStream(FSDataInputStream input, AccumuloConfiguration conf) throws IOException { DataInputStream decryptingInput = null; byte[] magic = DfsLogger.LOG_FILE_HEADER_V3.getBytes(UTF_8); @@ -414,7 +413,7 @@ public static DFSLoggerInputStreams readHeaderAndReturnStream(VolumeManager fs, } } catch (EOFException e) { - log.warn("Got EOFException trying to read WAL header information, assuming the rest of the file (" + path + ") has no data."); + log.warn("Got EOFException trying to read WAL header information, assuming the rest of the file has no data."); // A TabletServer might have died before the (complete) header was written throw new LogHeaderIncompleteException(e); } diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java index 
11097ceaea..ba5e488226 100644 --- a/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java +++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/log/LogSorter.java @@ -113,44 +113,46 @@ public void sort(String name, Path srcPath, String destPath) { // the following call does not throw an exception if the file/dir does not exist fs.deleteRecursively(new Path(destPath)); -DFSLoggerInputStreams inputStreams; -try { - inputStreams = DfsLogger.readHeaderAndReturnStream(fs, srcPath, conf); -} catch (LogHeaderIncompleteException e) { - log.warn("Could not read header from write-ahead log " + srcPath + ". Not sorting."); - // Creating a 'finished' marker will cause recovery to proceed normally and the - // empty file will be correctly ignored downstream. - fs.mkdirs(new Path(destPath)); - writeBuffer(destPath, Collections.> emptyList(), part++); - fs.create(SortedLogState.getFinishedMarkerPath(destPath)).close(); - return; -} +try (final FSDataInputStream fsinput = fs.open(srcPath)) { + DFSLoggerInputStreams inputStreams; + try { +inputStreams = DfsLogger.readHeaderAndReturnStream(fsinput, conf); + } catch (LogHeaderIncompleteException e) { +log.warn("Could not read header from write-ahead log " + srcPath + ". Not sorting."); +// Creating a 'finished' marker will cause recovery to proceed normally and the +// empty file will be correctly ignored downstream. +fs.mkdirs(new Path(destPath)); +writeBuffer(destPath, Collections. 
> emptyList(), part++); +fs.create(SortedLogState.getFinishedMarkerPath(destPath)).close(); +return; + } -this.input = inputStreams.getOriginalInput(); -this.decryptingInput = inputStreams.getDecryptingInputStream(); + this.input = inputStreams.getOriginalInput(); + this.decryptingInput = inputStreams.getDecryptingInputStream(); -final long bufferSize = conf.getMemoryInBytes(Property.TSERV_SORT_BUFFER_SIZE); -Thread.currentThread().setName("Sorting " + name + " for recovery"); -while (true) { - final ArrayList > buffer = new ArrayList<>(); - try { -long start = input.getPos(); -while (input.getPos() - start < bufferSize) { - LogFileKey key = new LogFileKey(); - LogFileValue value = new LogFileValue(); - key.readFields(decryptingInput); - value.readFields(decryptingInput); - buffer.add(new Pair<>(key, value)); + final long bufferSize =
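The structural change in this diff is the try-with-resources block: the caller now opens the `FSDataInputStream` and the stream is closed on every path, including the incomplete-header bail-out. A minimal self-contained sketch of that pattern, using plain `java.io` streams instead of Hadoop's classes (names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the try-with-resources pattern the diff introduces. Class and
// method names are invented for illustration, not Accumulo's actual code.
class HeaderReaderSketch {
  static final int MAGIC_LEN = 4; // stand-in for the WAL header magic

  // Parses a fixed-size header; throws EOFException (an IOException)
  // if the file is truncated before the header ends.
  static byte[] readHeader(InputStream input) throws IOException {
    DataInputStream in = new DataInputStream(input);
    byte[] magic = new byte[MAGIC_LEN];
    in.readFully(magic); // EOFException on a short read
    return magic;
  }

  // Returns true if the header parsed, false if the file was incomplete.
  // Either way, try-with-resources guarantees the stream is closed.
  static boolean sort(byte[] file) {
    try (InputStream input = new ByteArrayInputStream(file)) {
      readHeader(input);
      return true;
    } catch (IOException e) {
      // Incomplete header: the real code writes a 'finished' marker
      // here so recovery proceeds normally on the empty result.
      return false;
    }
  }
}
```

Before the patch, the early `throw`/`return` paths in `readHeaderAndReturnStream` could leak the opened stream; moving the `open` into the caller's try-with-resources closes it on success, failure, and the exception path alike.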
[jira] [Resolved] (ACCUMULO-4814) Add links to Java classes in examples documentation
[ https://issues.apache.org/jira/browse/ACCUMULO-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Walch resolved ACCUMULO-4814. -- Resolution: Fixed > Add links to Java classes in examples documentation > --- > > Key: ACCUMULO-4814 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4814 > Project: Accumulo > Issue Type: Task > Components: examples >Reporter: Mike Walch >Assignee: Mike Walch >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h >
[GitHub] mikewalch commented on a change in pull request #382: ACCUMULO-4782 switch session manager to a concurrent map
mikewalch commented on a change in pull request #382: ACCUMULO-4782 switch session manager to a concurrent map URL: https://github.com/apache/accumulo/pull/382#discussion_r168276211 ## File path: server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java ## @@ -70,20 +84,20 @@ public void run() { }; SimpleTimer.getInstance(conf).schedule(r, 0, Math.max(maxIdle / 2, 1000)); + } - public synchronized long createSession(Session session, boolean reserve) { + public long createSession(Session session, boolean reserve) { Review comment: Might be nice to have an initial state for Sessions and check that it is in that state here.
[GitHub] keith-turner commented on a change in pull request #383: ACCUMULO-4809 Avoid blocking during session cleanup
keith-turner commented on a change in pull request #383: ACCUMULO-4809 Avoid blocking during session cleanup URL: https://github.com/apache/accumulo/pull/383#discussion_r168271827 ## File path: server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java ## @@ -169,36 +169,36 @@ public Session removeSession(long sessionId, boolean unreserve) { } private void sweep(final long maxIdle, final long maxUpdateIdle) { -List sessionsToCleanup = new ArrayList<>(); -synchronized (this) { - Iterator iter = sessions.values().iterator(); - while (iter.hasNext()) { -Session session = iter.next(); -long configuredIdle = maxIdle; -if (session instanceof UpdateSession) { - configuredIdle = maxUpdateIdle; -} -long idleTime = System.currentTimeMillis() - session.lastAccessTime; -if (idleTime > configuredIdle && !session.reserved) { - log.info("Closing idle session from user=" + session.getUser() + ", client=" + session.client + ", idle=" + idleTime + "ms"); - iter.remove(); - sessionsToCleanup.add(session); +synchronized (idleSessions) { Review comment: > Is this to prevent multiple interleaving calls to sweep? Yeah, that is the purpose. I think in the current code that only one thread will ever call sweep at a time, but I added the sync just to be safe. I will add a comment about this.
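The locking choice under discussion — a dedicated lock that serializes sweeps without blocking other session operations — can be sketched like this (a simplified model, not the actual SessionManager; field and class names are invented):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: sessions live in a concurrent map, so lookups and
// inserts never block on the sweeper; a synchronized block on a dedicated
// lock object only guarantees that at most one sweep runs at a time.
class SweeperSketch {
  static class Session {
    volatile long lastAccessTime = System.currentTimeMillis();
    volatile boolean reserved = false;
  }

  final ConcurrentHashMap<Long, Session> sessions = new ConcurrentHashMap<>();
  private final Object sweepLock = new Object(); // stands in for idleSessions

  // Removes idle, unreserved sessions; returns how many were closed.
  int sweep(long maxIdle) {
    int closed = 0;
    // Serializes concurrent sweeps only; other map operations proceed.
    synchronized (sweepLock) {
      Iterator<Map.Entry<Long, Session>> iter = sessions.entrySet().iterator();
      while (iter.hasNext()) {
        Session s = iter.next().getValue();
        long idle = System.currentTimeMillis() - s.lastAccessTime;
        if (idle > maxIdle && !s.reserved) {
          iter.remove(); // safe: ConcurrentHashMap iterators support remove
          closed++;
        }
      }
    }
    return closed;
  }
}
```

Compared with `synchronized (this)` in the old code, the narrow lock means a slow cleanup cannot stall threads creating or reserving sessions, which is the blocking the PR title refers to.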
[jira] [Updated] (ACCUMULO-4814) Add links to Java classes in examples documentation
[ https://issues.apache.org/jira/browse/ACCUMULO-4814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated ACCUMULO-4814: - Labels: pull-request-available (was: ) > Add links to Java classes in examples documentation > --- > > Key: ACCUMULO-4814 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4814 > Project: Accumulo > Issue Type: Task > Components: examples >Reporter: Mike Walch >Assignee: Mike Walch >Priority: Minor > Labels: pull-request-available >
[GitHub] mikewalch closed pull request #9: ACCUMULO-4814 Added links to Java classes
mikewalch closed pull request #9: ACCUMULO-4814 Added links to Java classes URL: https://github.com/apache/accumulo-examples/pull/9 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/docs/batch.md b/docs/batch.md index c52f9f6..19acf84 100644 --- a/docs/batch.md +++ b/docs/batch.md @@ -16,32 +16,31 @@ limitations under the License. --> # Apache Accumulo Batch Writing and Scanning Example -This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.client: +This tutorial uses the following Java classes: - * SequentialBatchWriter.java - writes mutations with sequential rows and random values - * RandomBatchWriter.java - used by SequentialBatchWriter to generate random values - * RandomBatchScanner.java - reads random rows and verifies their values + * [SequentialBatchWriter.java] - writes mutations with sequential rows and random values + * [RandomBatchWriter.java] - used by SequentialBatchWriter to generate random values + * [RandomBatchScanner.java] - reads random rows and verifies their values -This is an example of how to use the batch writer and batch scanner. To compile -the example, run maven and copy the produced jar into the accumulo lib dir. -This is already done in the tar distribution. +This is an example of how to use the BatchWriter and BatchScanner. -Below are commands that add 1 entries to accumulo and then do 100 random -queries. The write command generates random 50 byte values. +First, you must ensure that the user you are running with (i.e `myuser` below) has the +`exampleVis` authorization. -Be sure to use the name of your instance (given as instance here) and the appropriate -list of zookeeper nodes (given as zookeepers here). 
+$ accumulo shell -u root -e "setauths -u myuser -s exampleVis" -Before you run this, you must ensure that the user you are running has the -"exampleVis" authorization. (you can set this in the shell with "setauths -u username -s exampleVis") +Second, you must create the table, batchtest1, ahead of time. -$ accumulo shell -u root -e "setauths -u username -s exampleVis" +$ accumulo shell -u root -e "createtable batchtest1" -You must also create the table, batchtest1, ahead of time. (In the shell, use "createtable batchtest1") +The command below adds 1 entries with random 50 bytes values to Accumulo. -$ accumulo shell -u username -e "createtable batchtest1" $ ./bin/runex client.SequentialBatchWriter -c ./examples.conf -t batchtest1 --start 0 --num 1 --size 50 --batchMemory 20M --batchLatency 500 --batchThreads 20 --vis exampleVis + +The command below will do 100 random queries. + $ ./bin/runex client.RandomBatchScanner -c ./examples.conf -t batchtest1 --num 100 --min 0 --max 1 --size 50 --scanThreads 20 --auths exampleVis + 07 11:33:11,103 [client.CountingVerifyingReceiver] INFO : Generating 100 random queries... 07 11:33:11,112 [client.CountingVerifyingReceiver] INFO : finished 07 11:33:11,260 [client.CountingVerifyingReceiver] INFO : 694.44 lookups/sec 0.14 secs @@ -53,3 +52,7 @@ You must also create the table, batchtest1, ahead of time. 
(In the shell, use "c 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : 2173.91 lookups/sec 0.05 secs 07 11:33:11,416 [client.CountingVerifyingReceiver] INFO : num results : 100 + +[SequentialBatchWriter.java]: ../src/main/java/org/apache/accumulo/examples/client/SequentialBatchWriter.java +[RandomBatchWriter.java]: ../src/main/java/org/apache/accumulo/examples/client/RandomBatchWriter.java +[RandomBatchScanner.java]: ../src/main/java/org/apache/accumulo/examples/client/RandomBatchScanner.java diff --git a/docs/client.md b/docs/client.md index b3872a8..af02efe 100644 --- a/docs/client.md +++ b/docs/client.md @@ -16,22 +16,17 @@ limitations under the License. --> # Apache Accumulo Client Examples -This documents how you run the simplest java examples. +The following Java classes are examples of the Accumulo client API: -This tutorial uses the following Java classes, which can be found in org.apache.accumulo.examples.client: + * [Flush.java] - flushes a table + * [RowOperations.java] - reads and writes rows + * [ReadWriteExample.java] - creates a table, writes to it, and reads from it - * Flush.java - flushes a table - * RowOperations.java - reads and writes rows - * ReadWriteExample.java - creates a table, writes to it, and reads from it - -Using the accumulo command, you can run the simple client examples by providing their -class name, and enough arguments to find your accumulo instance. For example, -the Flush class will flush a table: +The Flush class will flush a table: $ ./bin/runex client.Flush -c ./examples.conf -t
[jira] [Commented] (ACCUMULO-4806) Allow offline bulk imports
[ https://issues.apache.org/jira/browse/ACCUMULO-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364493#comment-16364493 ] Keith Turner commented on ACCUMULO-4806: For the possible workflow I mentioned I was thinking the offline bulk import could use the mapping file mentioned in ACCUMULO-4813. I think this could make that entire sequence of operations very fast. > Allow offline bulk imports > -- > > Key: ACCUMULO-4806 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4806 > Project: Accumulo > Issue Type: Sub-task > Components: master, tserver >Reporter: Mark Owens >Assignee: Michael Miller >Priority: Major > Fix For: 2.0.0 > > > Allowing offline bulk imports would be useful for some customers. Currently > these customers already take tables offline to set split points but then have > to bring them back online before starting the import.
[jira] [Commented] (ACCUMULO-4806) Allow offline bulk imports
[ https://issues.apache.org/jira/browse/ACCUMULO-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364490#comment-16364490 ] Michael Miller commented on ACCUMULO-4806: -- Also, would it save time even if the 4 steps Keith mentioned above = total time current bulk import? Say if you can complete Steps 1 and 2 ahead of time, before data arrives? > Allow offline bulk imports > -- > > Key: ACCUMULO-4806 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4806 > Project: Accumulo > Issue Type: Sub-task > Components: master, tserver >Reporter: Mark Owens >Assignee: Michael Miller >Priority: Major > Fix For: 2.0.0 > > > Allowing offline bulk imports would be useful for some customers. Currently > these customers already take tables offline to set split points but then have > to bring them back online before starting the import.
[jira] [Comment Edited] (ACCUMULO-4806) Allow offline bulk imports
[ https://issues.apache.org/jira/browse/ACCUMULO-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364471#comment-16364471 ] Keith Turner edited comment on ACCUMULO-4806 at 2/14/18 5:31 PM: - [~etcoleman] if create table supported creating an offline table, would the following work flow be useful? * Create offline table * Add splits to offline table * Bulk import to offline table * Bring table online was (Author: kturner): [~etcoleman] if create table supported creating and offline table, would the following work flow be useful? * Create offline table * Add splits to offline table * Bulk import to offline table * Bring table online > Allow offline bulk imports > -- > > Key: ACCUMULO-4806 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4806 > Project: Accumulo > Issue Type: Sub-task > Components: master, tserver >Reporter: Mark Owens >Assignee: Michael Miller >Priority: Major > Fix For: 2.0.0 > > > Allowing offline bulk imports would be useful for some customers. Currently > these customers already take tables offline to set split points but then have > to bring them back online before starting the import.
[jira] [Commented] (ACCUMULO-4808) Add splits to table at table creation.
[ https://issues.apache.org/jira/browse/ACCUMULO-4808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364472#comment-16364472 ] Keith Turner commented on ACCUMULO-4808: Another possible way to offer this functionality is to allow creating tables in offline mode and allow splitting offline tables. > Add splits to table at table creation. > -- > > Key: ACCUMULO-4808 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4808 > Project: Accumulo > Issue Type: New Feature > Components: master, tserver >Reporter: Mark Owens >Assignee: Mark Owens >Priority: Major > Fix For: 2.0.0 > > > Add capability to add table splits at table creation. Recent changes now > allow iterator and locality groups to be created at table creation. Do the > same with splits. Comment below from > [ACCUMULO-4806|https://issues.apache.org/jira/browse/ACCUMULO-4806] explains > the motivation for the request: > {quote}[~etcoleman] added a comment - 2 hours ago > It would go a long way if the splits could be added at table creation or > when table is offline. When the other API changes were made by Mark, I > wondered if this task could also be done at that time - but I believe > that it was more complicated. > The delay is that when a table is created and then the splits added and then > taken offline there is a period proportional to the number of splits as they > are off-loaded from the tserver where they originally got assigned. (The > re-online with splits distributed across the cluster is quite fast) > If the splits could be added at table creation, or while the table is offline > so that the delay for shedding the tablets could be avoided, then the need to > perform the actual import offline would not be as necessary. > > {quote} >
[jira] [Commented] (ACCUMULO-4806) Allow offline bulk imports
[ https://issues.apache.org/jira/browse/ACCUMULO-4806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364471#comment-16364471 ] Keith Turner commented on ACCUMULO-4806: [~etcoleman] if create table supported creating and offline table, would the following work flow be useful? * Create offline table * Add splits to offline table * Bulk import to offline table * Bring table online > Allow offline bulk imports > -- > > Key: ACCUMULO-4806 > URL: https://issues.apache.org/jira/browse/ACCUMULO-4806 > Project: Accumulo > Issue Type: Sub-task > Components: master, tserver >Reporter: Mark Owens >Assignee: Michael Miller >Priority: Major > Fix For: 2.0.0 > > > Allowing offline bulk imports would be useful for some customers. Currently > these customers already take tables offline to set split points but then have > to bring them back online before starting the import.
Accumulo-Pull-Requests - Build # 1042 - Still Failing
The Apache Jenkins build system has built Accumulo-Pull-Requests (build #1042) Status: Still Failing Check console output at https://builds.apache.org/job/Accumulo-Pull-Requests/1042/ to view the results.
[GitHub] PircDef commented on a change in pull request #371: ACCUMULO-4792 Improve crypto configuration checks
PircDef commented on a change in pull request #371: ACCUMULO-4792 Improve crypto configuration checks URL: https://github.com/apache/accumulo/pull/371#discussion_r168236489 ## File path: core/src/main/java/org/apache/accumulo/core/conf/ConfigSanityCheck.java ## @@ -98,13 +110,17 @@ else if (!prop.getType().isValidFormat(value)) log.warn("Use of {} and {} are deprecated. Consider using {} instead.", INSTANCE_DFS_URI, INSTANCE_DFS_DIR, Property.INSTANCE_VOLUMES); } -if (cipherSuite.equals("NullCipher") && !keyAlgorithm.equals("NullCipher")) { - fatal(Property.CRYPTO_CIPHER_SUITE.getKey() + " should be configured when " + Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey() + " is set."); +if ((cipherSuite.equals(NULL_CIPHER) || keyAlgorithm.equals(NULL_CIPHER)) && !cipherSuite.equals(keyAlgorithm)) { + fatal(Property.CRYPTO_CIPHER_SUITE.getKey() + " and " + Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME + " must both be configured."); } -if (!cipherSuite.equals("NullCipher") && keyAlgorithm.equals("NullCipher")) { - fatal(Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey() + " should be configured when " + Property.CRYPTO_CIPHER_SUITE.getKey() + " is set."); +if (cryptoModule.equals(NULL_CRYPTO_MODULE) && !secretKeyEncryptionStrategy.equals(NULL_SECRET_KEY_ENCRYPTION_STRATEGY)) { Review comment: There's value in consistent style. Latest push updated this, too.
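The consistency rule the new condition encodes — the cipher suite and its key algorithm must either both be the null cipher or both be real values — can be isolated as a small predicate. Constant values and class names below are assumptions for illustration, not the actual ConfigSanityCheck code:

```java
// Sketch of the check under review: a configuration is inconsistent exactly
// when one of the two crypto settings is "NullCipher" and the other is not.
class CryptoCheckSketch {
  static final String NULL_CIPHER = "NullCipher"; // assumed constant value

  static boolean consistent(String cipherSuite, String keyAlgorithm) {
    // Mirrors the diff's condition: fatal when either side is the null
    // cipher but the two values differ.
    if ((cipherSuite.equals(NULL_CIPHER) || keyAlgorithm.equals(NULL_CIPHER))
        && !cipherSuite.equals(keyAlgorithm))
      return false;
    return true;
  }
}
```

Collapsing the two one-sided checks into this single symmetric condition is what makes the style consistent: both "suite set without algorithm" and "algorithm set without suite" fall out of the same predicate.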
[GitHub] PircDef commented on a change in pull request #371: ACCUMULO-4792 Improve crypto configuration checks
PircDef commented on a change in pull request #371: ACCUMULO-4792 Improve crypto configuration checks URL: https://github.com/apache/accumulo/pull/371#discussion_r168236321 ## File path: core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java ## @@ -93,4 +91,18 @@ public void testFail_cipherSuiteNotSetKeyAlgorithmSet() { m.put(Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey(), "AES"); ConfigSanityCheck.validate(m.entrySet()); } + Review comment: Agreed.
[GitHub] mikewalch commented on a change in pull request #383: ACCUMULO-4809 Avoid blocking during session cleanup
mikewalch commented on a change in pull request #383: ACCUMULO-4809 Avoid blocking during session cleanup URL: https://github.com/apache/accumulo/pull/383#discussion_r168228686 ## File path: server/tserver/src/main/java/org/apache/accumulo/tserver/session/SessionManager.java ## @@ -169,36 +169,36 @@ public Session removeSession(long sessionId, boolean unreserve) { } private void sweep(final long maxIdle, final long maxUpdateIdle) { -List sessionsToCleanup = new ArrayList<>(); -synchronized (this) { - Iterator iter = sessions.values().iterator(); - while (iter.hasNext()) { -Session session = iter.next(); -long configuredIdle = maxIdle; -if (session instanceof UpdateSession) { - configuredIdle = maxUpdateIdle; -} -long idleTime = System.currentTimeMillis() - session.lastAccessTime; -if (idleTime > configuredIdle && !session.reserved) { - log.info("Closing idle session from user=" + session.getUser() + ", client=" + session.client + ", idle=" + idleTime + "ms"); - iter.remove(); - sessionsToCleanup.add(session); +synchronized (idleSessions) { Review comment: why do you have a synchronized block on `idleSessions` with the other blocks removed? Is this to prevent multiple interleaving calls to sweep?
[GitHub] joshelser commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem
joshelser commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem URL: https://github.com/apache/accumulo/pull/369#issuecomment-365653327 Try following https://gitbox.apache.org/setup/ @adamjshook
[GitHub] adamjshook commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem
adamjshook commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem URL: https://github.com/apache/accumulo/pull/369#issuecomment-365651034 @joshelser Thanks! Looks like I don't have write access here on Github. Is that a mistake or should I apply it via `gitbox.apache.org`?
[GitHub] joshelser commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem
joshelser commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem URL: https://github.com/apache/accumulo/pull/369#issuecomment-365649905 > Is there a particular process for merging in fixes and applying them to other release lines? No hard+fast rules. General recommendation is to apply it to the oldest branch and then merge it through newer release lines.
[jira] [Created] (ACCUMULO-4815) AccumuloReplicaSystem refactor
Adam J Shook created ACCUMULO-4815: -- Summary: AccumuloReplicaSystem refactor Key: ACCUMULO-4815 URL: https://issues.apache.org/jira/browse/ACCUMULO-4815 Project: Accumulo Issue Type: Improvement Affects Versions: 1.8.1, 1.7.3, 1.9.0 Reporter: Adam J Shook The {{AccumuloReplicaSystem}} could use some refactoring to improve readability. See [PR #369|https://github.com/apache/accumulo/pull/369] for some discussion related to the cleanup. Includes but is not limited to refactoring out some methods, cleaning up logs/errors, and making the DFS Logger {{Closeable}}.
[GitHub] adamjshook commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem
adamjshook commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem URL: https://github.com/apache/accumulo/pull/369#issuecomment-365649778 Created [ACCUMULO-4815](https://issues.apache.org/jira/browse/ACCUMULO-4815) to track some cleanup efforts.
[GitHub] adamjshook commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem
adamjshook commented on issue #369: [ACCUMULO-4787] Close input stream in AccumuloReplicaSystem URL: https://github.com/apache/accumulo/pull/369#issuecomment-365648691 Is there a particular process for merging in fixes and applying them to other release lines?
[GitHub] keith-turner commented on a change in pull request #371: ACCUMULO-4792 Improve crypto configuration checks
keith-turner commented on a change in pull request #371: ACCUMULO-4792 Improve crypto configuration checks URL: https://github.com/apache/accumulo/pull/371#discussion_r168202225 ## File path: core/src/test/java/org/apache/accumulo/core/conf/ConfigSanityCheckTest.java ## @@ -93,4 +91,18 @@ public void testFail_cipherSuiteNotSetKeyAlgorithmSet() { m.put(Property.CRYPTO_CIPHER_KEY_ALGORITHM_NAME.getKey(), "AES"); ConfigSanityCheck.validate(m.entrySet()); } + Review comment: It would be nice to add the combos that are expected to pass to the `testPass()` function. For example when CRYPTO_MODULE_CLASS and CRYPTO_SECRET_KEY_ENCRYPTION_STRATEGY_CLASS are both null AND when both are not null.