[jira] [Commented] (HBASE-12565) Race condition in HRegion.batchMutate() causes partial data to be written when region closes
[ https://issues.apache.org/jira/browse/HBASE-12565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14224059#comment-14224059 ] John Leach commented on HBASE-12565: That is correct, it does not have to commit all of the data or none of it. It should, however, accurately relay which rows were written and which were not, with status codes similar to its 0.94 counterpart. It just looks like it needs a bit of attention with regard to error semantics. Race condition in HRegion.batchMutate() causes partial data to be written when region closes - Key: HBASE-12565 URL: https://issues.apache.org/jira/browse/HBASE-12565 Project: HBase Issue Type: Bug Components: Performance, regionserver Affects Versions: 0.98.6 Reporter: Scott Fines The following sequence of events can occur in HRegion's batchMutate() call:
1. A caller invokes HRegion.batchMutate() with a batch of N > 1 records.
2. batchMutate() acquires the region lock in startRegionOperation(), then calls doMiniBatchMutation().
3. doMiniBatchMutation() acquires one row lock.
4. The region closes.
5. doMiniBatchMutation() attempts to acquire a second row lock. The row lock acquisition also attempts to acquire the region lock, which fails because the region is closing.
At this stage, doMiniBatchMutation() stops writing further, BUT it WILL write data for the rows whose locks have already been acquired, and it advances the index in MiniBatchOperationInProgress. Then, after it terminates successfully, batchMutate() loops around a second time and attempts AGAIN to acquire the region closing lock. When that happens, a NotServingRegionException is thrown back to the caller. Thus, we have a race condition where partial data can be written while a region server is closing. The main problem stems from the location of the startRegionOperation() calls in batchMutate() and doMiniBatchMutation():
1. batchMutate() reacquires the region lock on each iteration of the loop, so some writes can succeed while later ones fail.
2. getRowLock() attempts to acquire the region lock once for each row, which allows doMiniBatchMutation() to terminate early; this forces batchMutate() to use multiple iterations and results in condition 1 being hit.
There appear to be two parts to the solution as well:
1. Open an internal path so that doMiniBatchMutation() can acquire row locks without checking for region closure. This has the added benefit of a significant performance improvement during large batch mutations.
2. Move the startRegionOperation() call out of the loop in batchMutate() so that multiple iterations of doMiniBatchMutation() do not cause the operation to fail. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
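The second part of the proposed solution (acquiring the region operation lock once for the whole batch) can be sketched as follows. This is a hypothetical, self-contained illustration of the locking change, with invented class and method names; it is not the actual HRegion code.

```java
// Sketch of the proposed locking change. A ReentrantReadWriteLock stands in for
// the region's close lock; doMiniBatch() stands in for doMiniBatchMutation().
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BatchMutateSketch {
    private final ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();
    private volatile boolean closing = false;

    /** Before: the lock is re-acquired every iteration, so a close can slip in
     *  between mini-batches and leave a partially written batch behind. */
    public int batchMutatePerIteration(int[] batch) {
        int written = 0;
        while (written < batch.length) {
            regionLock.readLock().lock();          // re-acquired on every loop
            try {
                if (closing) throw new IllegalStateException("NotServingRegion");
                written += doMiniBatch(batch, written);
            } finally {
                regionLock.readLock().unlock();
            }
        }
        return written;
    }

    /** After: the lock is held across all mini-batches, so the batch is
     *  all-or-nothing with respect to a region close. */
    public int batchMutateSingleAcquire(int[] batch) {
        regionLock.readLock().lock();              // acquired once for the whole batch
        try {
            if (closing) throw new IllegalStateException("NotServingRegion");
            int written = 0;
            while (written < batch.length) {
                written += doMiniBatch(batch, written);
            }
            return written;
        } finally {
            regionLock.readLock().unlock();
        }
    }

    // Pretend each mini-batch writes up to 2 rows.
    private int doMiniBatch(int[] batch, int offset) {
        return Math.min(2, batch.length - offset);
    }
}
```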
[jira] [Commented] (HBASE-12720) Make InternalScan LimitedPrivate
[ https://issues.apache.org/jira/browse/HBASE-12720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14253928#comment-14253928 ] John Leach commented on HBASE-12720: No, just 0.98 will work for us... Make InternalScan LimitedPrivate Key: HBASE-12720 URL: https://issues.apache.org/jira/browse/HBASE-12720 Project: HBase Issue Type: Improvement Affects Versions: 0.94.25, 0.98.9 Reporter: Vladimir Rodionov Assignee: Vladimir Rodionov Fix For: 1.0.0, 2.0.0, 0.98.10, 0.94.27 Attachments: HBase-12720-0.94.patch, HBase-12720.patch This is a request from sophisticated users :) Rationale: We would like the internal scan to be made available so we can see what is just in the MemStore (or in store files only). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12912) StoreScanner calls Configuration for Boolean Check on each initialization
[ https://issues.apache.org/jira/browse/HBASE-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290064#comment-14290064 ] John Leach commented on HBASE-12912: Assign to me in the near term, and I will have someone fix it. StoreScanner calls Configuration for Boolean Check on each initialization - Key: HBASE-12912 URL: https://issues.apache.org/jira/browse/HBASE-12912 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: Andrew Purtell Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0 Attachments: StoreScannerStall.tiff Original Estimate: 1h Remaining Estimate: 1h There is a clear CPU drain and iterator creation when creating store scanners under high load. Splice was running a TPCC test of our database, and we are seeing object creation and CPU waste on the boolean check. Code snippet:
{CODE:JAVA}
if (store != null && ((HStore) store).getHRegion() != null
    && store.getStorefilesCount() > 1) {
  RegionServerServices rsService = ((HStore) store).getHRegion().getRegionServerServices();
  if (rsService == null || !rsService.getConfiguration().getBoolean(
      STORESCANNER_PARALLEL_SEEK_ENABLE, false)) return;
  isParallelSeekEnabled = true;
  executor = rsService.getExecutorService();
}
{CODE}
Will attach profile... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-12912) StoreScanner calls Configuration for Boolean Check on each initialization
John Leach created HBASE-12912: -- Summary: StoreScanner calls Configuration for Boolean Check on each initialization Key: HBASE-12912 URL: https://issues.apache.org/jira/browse/HBASE-12912 Project: HBase Issue Type: Bug Reporter: John Leach Attachments: StoreScannerStall.tiff There is a clear CPU drain and iterator creation when creating store scanners under high load. Splice was running a TPCC test of our database, and we are seeing object creation and CPU waste on the boolean check. Code snippet:
{CODE:JAVA}
if (store != null && ((HStore) store).getHRegion() != null
    && store.getStorefilesCount() > 1) {
  RegionServerServices rsService = ((HStore) store).getHRegion().getRegionServerServices();
  if (rsService == null || !rsService.getConfiguration().getBoolean(
      STORESCANNER_PARALLEL_SEEK_ENABLE, false)) return;
  isParallelSeekEnabled = true;
  executor = rsService.getExecutorService();
}
{CODE}
Will attach profile... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
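The fix direction implied by the report is to resolve the flag once and cache it, rather than calling Configuration.getBoolean on every scanner construction. A minimal hypothetical sketch (not the actual HBase patch; a plain Map stands in for Configuration, and the class name is invented):

```java
// Double-checked lazy caching of a conf-derived flag: the Configuration lookup
// happens at most once, instead of once per StoreScanner initialization.
import java.util.Map;

public class ParallelSeekFlag {
    private static volatile Boolean cached;  // computed lazily, then reused

    // Stand-in for rsService.getConfiguration().getBoolean(...)
    static boolean readFromConf(Map<String, String> conf) {
        return Boolean.parseBoolean(
            conf.getOrDefault("hbase.storescanner.parallel.seek.enable", "false"));
    }

    static boolean isParallelSeekEnabled(Map<String, String> conf) {
        Boolean local = cached;
        if (local == null) {
            synchronized (ParallelSeekFlag.class) {
                if (cached == null) cached = readFromConf(conf);
                local = cached;
            }
        }
        return local;  // hot path: one volatile read, no conf access
    }
}
```

The trade-off is that a live change to the configuration value is no longer picked up, which is usually acceptable for a server-lifetime tuning flag like this one.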
[jira] [Updated] (HBASE-12912) StoreScanner calls Configuration for Boolean Check on each initialization
[ https://issues.apache.org/jira/browse/HBASE-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-12912: --- Attachment: StoreScannerStall.tiff A picture of the CPU effect of the boolean check... StoreScanner calls Configuration for Boolean Check on each initialization - Key: HBASE-12912 URL: https://issues.apache.org/jira/browse/HBASE-12912 Project: HBase Issue Type: Bug Reporter: John Leach Attachments: StoreScannerStall.tiff Original Estimate: 1h Remaining Estimate: 1h There is a clear CPU drain and iterator creation when creating store scanners under high load. Splice was running a TPCC test of our database, and we are seeing object creation and CPU waste on the boolean check. Code snippet:
{CODE:JAVA}
if (store != null && ((HStore) store).getHRegion() != null
    && store.getStorefilesCount() > 1) {
  RegionServerServices rsService = ((HStore) store).getHRegion().getRegionServerServices();
  if (rsService == null || !rsService.getConfiguration().getBoolean(
      STORESCANNER_PARALLEL_SEEK_ENABLE, false)) return;
  isParallelSeekEnabled = true;
  executor = rsService.getExecutorService();
}
{CODE}
Will attach profile... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13379) TimeRangeTracker Can Be Non-Blocking
[ https://issues.apache.org/jira/browse/HBASE-13379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390806#comment-14390806 ] John Leach commented on HBASE-13379: I will take a look, thanks for the heads up... TimeRangeTracker Can Be Non-Blocking Key: HBASE-13379 URL: https://issues.apache.org/jira/browse/HBASE-13379 Project: HBase Issue Type: New Feature Reporter: John Leach Priority: Minor Original Estimate: 2h Remaining Estimate: 2h I am seeing the TimeRangeTracker hotspot under heavy write load. It looks like a good use case for an atomic reference for the data point (min and max timestamp). I have a working proto, will submit patch for consideration once I run this test suite (beast). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13379) TimeRangeTracker Can Be Non-Blocking
[ https://issues.apache.org/jira/browse/HBASE-13379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390905#comment-14390905 ] John Leach commented on HBASE-13379: yeah please close, will rename my commit to that JIRA... TimeRangeTracker Can Be Non-Blocking Key: HBASE-13379 URL: https://issues.apache.org/jira/browse/HBASE-13379 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Original Estimate: 2h Remaining Estimate: 2h I am seeing the TimeRangeTracker hotspot under heavy write load. It looks like a good use case for an atomic reference for the data point (min and max timestamp). I have a working proto, will submit patch for consideration once I run this test suite (beast). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
John Leach created HBASE-13378: -- Summary: RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Priority: Minor This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
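One possible shape of the change can be sketched with invented names (a ConcurrentHashMap stands in for the real scannerReadPoints map, and this is illustrative only, not a patch): READ_UNCOMMITTED scanners never need a stable read point, so they can skip both the synchronized block and the map registration.

```java
// Sketch: only committed-read scanners take the scannerReadPoints lock;
// READ_UNCOMMITTED uses a sentinel read point that sees every edit.
import java.util.concurrent.ConcurrentHashMap;

public class ReadPointSketch {
    public enum IsolationLevel { READ_COMMITTED, READ_UNCOMMITTED }

    private final ConcurrentHashMap<Object, Long> scannerReadPoints = new ConcurrentHashMap<>();
    private volatile long currentReadPoint = 0;

    public long openScanner(Object scanner, IsolationLevel level) {
        if (level == IsolationLevel.READ_UNCOMMITTED) {
            return Long.MAX_VALUE;  // sees everything; nothing to register, nothing to lock
        }
        // Committed reads still coordinate so getSmallestReadPoint stays correct.
        synchronized (scannerReadPoints) {
            long readPt = currentReadPoint;
            scannerReadPoints.put(scanner, readPt);
            return readPt;
        }
    }

    public int registeredScanners() { return scannerReadPoints.size(); }
}
```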
[jira] [Assigned] (HBASE-13379) TimeRangeTracker Can Be Non-Blocking
[ https://issues.apache.org/jira/browse/HBASE-13379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-13379: -- Assignee: John Leach TimeRangeTracker Can Be Non-Blocking Key: HBASE-13379 URL: https://issues.apache.org/jira/browse/HBASE-13379 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Original Estimate: 2h Remaining Estimate: 2h I am seeing the TimeRangeTracker hotspot under heavy write load. It looks like a good use case for an atomic reference for the data point (min and max timestamp). I have a working proto, will submit patch for consideration once I run this test suite (beast). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13379) TimeRangeTracker Can Be Non-Blocking
John Leach created HBASE-13379: -- Summary: TimeRangeTracker Can Be Non-Blocking Key: HBASE-13379 URL: https://issues.apache.org/jira/browse/HBASE-13379 Project: HBase Issue Type: New Feature Reporter: John Leach Priority: Minor I am seeing the TimeRangeTracker hotspot under heavy write load. It looks like a good use case for an atomic reference for the data point (min and max timestamp). I have a working proto, will submit patch for consideration once I run this test suite (beast). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
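The atomic-reference idea described above can be sketched as an immutable (min, max) pair swapped with a compare-and-set loop, so concurrent writers never block on a monitor. This is an illustration of the approach with invented names, not the submitted patch.

```java
// Non-blocking min/max tracking: readers always see a consistent (min, max)
// pair, and writers retry a cheap CAS instead of taking a lock.
import java.util.concurrent.atomic.AtomicReference;

public class NonBlockingTimeRange {
    private static final class Range {
        final long min, max;
        Range(long min, long max) { this.min = min; this.max = max; }
    }

    private final AtomicReference<Range> range =
        new AtomicReference<>(new Range(Long.MAX_VALUE, Long.MIN_VALUE));

    public void includeTimestamp(long ts) {
        Range cur, next;
        do {
            cur = range.get();
            long newMin = Math.min(cur.min, ts);
            long newMax = Math.max(cur.max, ts);
            if (newMin == cur.min && newMax == cur.max) return; // already covered, no write
            next = new Range(newMin, newMax);
        } while (!range.compareAndSet(cur, next)); // retry if a writer raced us
    }

    public long getMin() { return range.get().min; }
    public long getMax() { return range.get().max; }
}
```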
[jira] [Assigned] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-12148: -- Assignee: John Leach (was: stack) Remove TimeRangeTracker as point of contention when many threads writing a Store Key: HBASE-12148 URL: https://issues.apache.org/jira/browse/HBASE-12148 Project: HBase Issue Type: Sub-task Components: Performance Affects Versions: 2.0.0, 0.99.1 Reporter: stack Assignee: John Leach Fix For: 2.0.0, 1.1.0, 0.98.13 Attachments: 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529743#comment-14529743 ] John Leach commented on HBASE-13420: Andrew, Sorry for the delay, the day job is killing me. Our workload hammers the metrics collection since it is called on really low level items (startRegionOperation and stopRegionOperation)... My vote is to remove this metric because it is really hard to understand what it is measuring... Oops, I do not have a vote. Regards, John Leach RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Assignee: Andrew Purtell Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1 Attachments: 1M-0.98.12.svg, 1M-0.98.13-SNAPSHOT.svg, HBASE-13420.patch, HBASE-13420.txt, hbase-13420.tar.gz, offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13691) HTable and RPC Code Accessing Configuration each time (Blocking)
[ https://issues.apache.org/jira/browse/HBASE-13691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13691: --- Attachment: Properties_getProperty.tiff HTable and RPC Code Accessing Configuration each time (Blocking) Key: HBASE-13691 URL: https://issues.apache.org/jira/browse/HBASE-13691 Project: HBase Issue Type: Improvement Reporter: John Leach Attachments: Properties_getProperty.tiff Properties.getProperty blocks under load... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13691) HTable and RPC Code Accessing Configuration each time (Blocking)
John Leach created HBASE-13691: -- Summary: HTable and RPC Code Accessing Configuration each time (Blocking) Key: HBASE-13691 URL: https://issues.apache.org/jira/browse/HBASE-13691 Project: HBase Issue Type: Improvement Reporter: John Leach Properties.getProperty blocks under load... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13690) Client Scanner Initialization Reformats strings every time
John Leach created HBASE-13690: -- Summary: Client Scanner Initialization Reformats strings every time Key: HBASE-13690 URL: https://issues.apache.org/jira/browse/HBASE-13690 Project: HBase Issue Type: Improvement Reporter: John Leach Priority: Critical The client scanner continually goes back into the conf for values...
{CODE:JAVA}
public ClientScanner(final Configuration conf, final Scan scan, final TableName tableName,
    HConnection connection, RpcRetryingCallerFactory rpcFactory,
    RpcControllerFactory controllerFactory) throws IOException {
  if (LOG.isTraceEnabled()) {
    LOG.trace("Scan table=" + tableName
        + ", startRow=" + Bytes.toStringBinary(scan.getStartRow()));
  }
  this.scan = scan;
  this.tableName = tableName;
  this.lastNext = System.currentTimeMillis();
  this.connection = connection;
  if (scan.getMaxResultSize() > 0) {
    this.maxScannerResultSize = scan.getMaxResultSize();
  } else {
    this.maxScannerResultSize = conf.getLong(
        HConstants.HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE_KEY,
        HConstants.DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE);
  }
  this.scannerTimeout = HBaseConfiguration.getInt(conf,
      HConstants.HBASE_CLIENT_SCANNER_TIMEOUT_PERIOD,
      HConstants.HBASE_REGIONSERVER_LEASE_PERIOD_KEY,
      HConstants.DEFAULT_HBASE_CLIENT_SCANNER_TIMEOUT_PERIOD);
  // check if application wants to collect scan metrics
  initScanMetrics(scan);
  // Use the caching from the Scan. If not set, use the default cache setting for this table.
  if (this.scan.getCaching() > 0) {
    this.caching = this.scan.getCaching();
  } else {
    this.caching = conf.getInt(
        HConstants.HBASE_CLIENT_SCANNER_CACHING,
        HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING);
  }
  this.caller = rpcFactory.<Result[]> newCaller();
  this.rpcControllerFactory = controllerFactory;
  initializeScannerInConstruction();
}
{CODE}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
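A general remedy for this pattern is to resolve the conf-derived defaults once per Configuration and reuse them across ClientScanner instances. A hypothetical sketch (a plain Map stands in for Configuration, the class name is invented, and the default values shown are illustrative stand-ins for the HConstants defaults):

```java
// Resolve scanner defaults once per configuration object and cache the result,
// instead of parsing conf values in every ClientScanner constructor.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ScannerDefaults {
    final long maxResultSize;
    final int caching;

    private ScannerDefaults(long maxResultSize, int caching) {
        this.maxResultSize = maxResultSize;
        this.caching = caching;
    }

    // One cache entry per configuration (real code would key by Configuration identity).
    private static final ConcurrentHashMap<Map<String, String>, ScannerDefaults> CACHE =
        new ConcurrentHashMap<>();

    static ScannerDefaults of(Map<String, String> conf) {
        return CACHE.computeIfAbsent(conf, c -> new ScannerDefaults(
            Long.parseLong(c.getOrDefault("hbase.client.scanner.max.result.size", "2097152")),
            Integer.parseInt(c.getOrDefault("hbase.client.scanner.caching", "100"))));
    }
}
```

Each constructor call then becomes a single map lookup rather than repeated string parsing of configuration properties.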
[jira] [Updated] (HBASE-13690) Client Scanner Initialization Reformats strings every time
[ https://issues.apache.org/jira/browse/HBASE-13690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13690: --- Attachment: ClientScanner_String_Format.tiff Client Scanner Initialization Reformats strings every time -- Key: HBASE-13690 URL: https://issues.apache.org/jira/browse/HBASE-13690 Project: HBase Issue Type: Improvement Reporter: John Leach Priority: Critical Attachments: ClientScanner_String_Format.tiff The client scanner continually goes back into the conf for values...
{CODE:JAVA}
public ClientScanner(final Configuration conf, final Scan scan, final TableName tableName,
    HConnection connection, RpcRetryingCallerFactory rpcFactory,
    RpcControllerFactory controllerFactory) throws IOException {
  if (LOG.isTraceEnabled()) {
    LOG.trace("Scan table=" + tableName
        + ", startRow=" + Bytes.toStringBinary(scan.getStartRow()));
  }
  this.scan = scan;
  this.tableName = tableName;
  this.lastNext = System.currentTimeMillis();
  this.connection = connection;
  if (scan.getMaxResultSize() > 0) {
    this.maxScannerResultSize = scan.getMaxResultSize();
  } else {
    this.maxScannerResultSize = conf.getLong(
        HConstants.HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE_KEY,
        HConstants.DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE);
  }
  this.scannerTimeout = HBaseConfiguration.getInt(conf,
      HConstants.HBASE_CLIENT_SCANNER_TIMEOUT_PERIOD,
      HConstants.HBASE_REGIONSERVER_LEASE_PERIOD_KEY,
      HConstants.DEFAULT_HBASE_CLIENT_SCANNER_TIMEOUT_PERIOD);
  // check if application wants to collect scan metrics
  initScanMetrics(scan);
  // Use the caching from the Scan. If not set, use the default cache setting for this table.
  if (this.scan.getCaching() > 0) {
    this.caching = this.scan.getCaching();
  } else {
    this.caching = conf.getInt(
        HConstants.HBASE_CLIENT_SCANNER_CACHING,
        HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING);
  }
  this.caller = rpcFactory.<Result[]> newCaller();
  this.rpcControllerFactory = controllerFactory;
  initializeScannerInConstruction();
}
{CODE}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13420: --- Attachment: offerExecutionLatency.tiff Document showing blocked threads during RegionEnvironment#offerExecutionLatency RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Attachments: offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
John Leach created HBASE-13420: -- Summary: RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
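One lock-free alternative to buffering raw latency samples in a bounded blocking queue is to aggregate on the write path with LongAdder, which is built for high-contention counters. This is a hypothetical sketch of that alternative, not the patch attached to this issue (which simply removed the metric capture):

```java
// Aggregate latencies lock-free instead of queueing every raw sample:
// the hot path is two striped-counter increments and never blocks.
import java.util.concurrent.atomic.LongAdder;

public class LatencyMeter {
    private final LongAdder count = new LongAdder();
    private final LongAdder totalNanos = new LongAdder();

    public void record(long nanos) {   // safe to call from many region threads at once
        count.increment();
        totalNanos.add(nanos);
    }

    // The metrics reporter (e.g. a periodic executor) reads the aggregate.
    public long meanNanos() {
        long n = count.sum();
        return n == 0 ? 0 : totalNanos.sum() / n;
    }
}
```

The loss relative to the queue of samples is per-sample detail (no percentiles without extra structure), but the writer side can never block a region operation.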
[jira] [Updated] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13420: --- Status: Patch Available (was: In Progress) Patch submitted to simply remove the calculation. RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Assignee: John Leach Attachments: HBASE-13420.txt, offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13420: --- Attachment: HBASE-13420.txt Simple removal of metric capture for coprocessors. RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Assignee: John Leach Attachments: HBASE-13420.txt, offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-13420 started by John Leach. -- RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Assignee: John Leach Attachments: HBASE-13420.txt, offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-13420: -- Assignee: John Leach RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Assignee: John Leach Attachments: offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14484519#comment-14484519 ] John Leach commented on HBASE-13420: I think this metric is way too broad to be coherent. Is it the latency on a postRegionOperation call or a prePut on the observer? The definition of the metric would be: the first N (100) latencies from any possible coprocessor call for a specific RegionObserver, refreshed every 45 seconds. Still working on a clever acronym... Would it make sense to build an actual bean for each of the observers that reports real metrics and is registered in JMX, following the signature of the observer? We clearly need a short term fix, but I am concerned we are continuing a metric that really serves no purpose. What purpose does this metric serve? RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Assignee: Andrew Purtell Attachments: HBASE-13420.patch, HBASE-13420.txt, offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
John Leach created HBASE-13427: -- Summary: HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-13427: -- Assignee: John Leach HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: SetFromMap.add_Runnable.tiff Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Attachment: SetFromMap.add_Runnable.tiff Graph of addChangedReaderObserver HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Attachments: SetFromMap.add_Runnable.tiff Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Remaining Estimate: 2h Original Estimate: 2h HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485539#comment-14485539 ] John Leach commented on HBASE-13420: Andrew, Will do for the short term. Clearly, if we want to display metrics for coprocessors, they should cover all coprocessors and have a scalable meter implementation, with a clear understanding of what execution time means. John RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Assignee: Andrew Purtell Attachments: HBASE-13420.patch, HBASE-13420.txt, offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100):
{CODE:JAVA}
private final BlockingQueue<Long> coprocessorTimeNanos =
    new ArrayBlockingQueue<Long>(LATENCY_BUFFER_SIZE);
{CODE}
and then I see a drain coming from MetricsRegionWrapperImpl with a 45 second executor (HRegionMetricsWrapperRunable):
RegionCoprocessorHost#getCoprocessorExecutionStatistics()
RegionCoprocessorHost#getExecutionLatenciesNanos()
Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-13427 started by John Leach. -- HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Attachment: HBASE-13427.patch Attached patch HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Status: Patch Available (was: In Progress) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Status: Open (was: Patch Available) Cancel patch, adding curly... HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Status: Patch Available (was: In Progress) New Patch with curly braces HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Attachment: HBASE-13427_CURLY_BRACES.patch Adding Curly Braces HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-13427 started by John Leach. -- HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14485902#comment-14485902 ] John Leach commented on HBASE-13427: nits make the world go around... Love em... HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13378: --- Status: Patch Available (was: In Progress) Patch Submitted RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.txt Original Estimate: 2h Remaining Estimate: 2h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-13378: -- Assignee: John Leach RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.txt Original Estimate: 2h Remaining Estimate: 2h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13378: --- Attachment: HBASE-13378.txt Added patch HBASE-13378.txt that removes scans with IsolationLevel.READ_UNCOMMITTED from the synchronization block, and from placement in and removal from the readPoints ConcurrentHashMap. The hotspot is gone from our profiling. RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Priority: Minor Attachments: HBASE-13378.txt Original Estimate: 2h Remaining Estimate: 2h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
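The shape of the change the attachment describes can be sketched as follows. This is an illustrative simplification, not the actual patch: the `ScannerReadPoints` class, `Long.MAX_VALUE` read point, and `latestCommittedReadPoint` field are assumptions made for the example.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: READ_UNCOMMITTED scanners read everything, so there is no read
// point to protect; they can skip both the synchronized block and the
// scanner read-point map entirely.
final class ScannerReadPoints {
    enum IsolationLevel { READ_COMMITTED, READ_UNCOMMITTED }

    private final ConcurrentHashMap<Object, Long> scannerReadPoints =
        new ConcurrentHashMap<>();
    private volatile long latestCommittedReadPoint = 0L;

    long open(Object scanner, IsolationLevel isolationLevel) {
        if (isolationLevel == IsolationLevel.READ_UNCOMMITTED) {
            // No lock taken, no map entry made: the fast path for uncommitted reads.
            return Long.MAX_VALUE;
        }
        synchronized (scannerReadPoints) {
            long readPt = latestCommittedReadPoint;
            scannerReadPoints.put(scanner, readPt);
            return readPt;
        }
    }

    void close(Object scanner, IsolationLevel isolationLevel) {
        if (isolationLevel != IsolationLevel.READ_UNCOMMITTED) {
            scannerReadPoints.remove(scanner);
        }
    }

    int trackedScanners() {
        return scannerReadPoints.size();
    }

    public static void main(String[] args) {
        ScannerReadPoints points = new ScannerReadPoints();
        Object fast = new Object();
        long pt = points.open(fast, IsolationLevel.READ_UNCOMMITTED);
        System.out.println(pt == Long.MAX_VALUE);     // true
        System.out.println(points.trackedScanners()); // 0
    }
}
```

The later comments on this ticket raise the real cost of this bypass: cells read by an untracked scanner are no longer protected from flushes and compactions, since `getSmallestReadPoint` never sees the scanner.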
[jira] [Work started] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-13378 started by John Leach. -- RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.txt Original Estimate: 2h Remaining Estimate: 2h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14392738#comment-14392738 ] John Leach commented on HBASE-12148: Oops. I kept the synchronization in the call methods. I will fix and submit another patch. Remove TimeRangeTracker as point of contention when many threads writing a Store Key: HBASE-12148 URL: https://issues.apache.org/jira/browse/HBASE-12148 Project: HBase Issue Type: Sub-task Components: Performance Affects Versions: 2.0.0, 0.99.1 Reporter: stack Assignee: John Leach Fix For: 2.0.0, 1.1.0, 0.98.13 Attachments: 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, HBASE-12148.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-12148: --- Attachment: HBASE-12148V2.txt Adding another patch that removes the synchronization on the get calls. Remove TimeRangeTracker as point of contention when many threads writing a Store Key: HBASE-12148 URL: https://issues.apache.org/jira/browse/HBASE-12148 Project: HBase Issue Type: Sub-task Components: Performance Affects Versions: 2.0.0, 0.99.1 Reporter: stack Assignee: John Leach Fix For: 2.0.0, 1.1.0, 0.98.13 Attachments: 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, HBASE-12148.txt, HBASE-12148V2.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-12148: --- Status: Patch Available (was: Reopened) Patch submitted for consideration... Remove TimeRangeTracker as point of contention when many threads writing a Store Key: HBASE-12148 URL: https://issues.apache.org/jira/browse/HBASE-12148 Project: HBase Issue Type: Sub-task Components: Performance Affects Versions: 0.99.1, 2.0.0 Reporter: stack Assignee: John Leach Fix For: 2.0.0, 1.1.0, 0.98.13 Attachments: 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, HBASE-12148.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-12148: --- Attachment: HBASE-12148.txt Adding a non-blocking TimeRangeTracker that takes advantage of an AtomicReference for concurrency. Remove TimeRangeTracker as point of contention when many threads writing a Store Key: HBASE-12148 URL: https://issues.apache.org/jira/browse/HBASE-12148 Project: HBase Issue Type: Sub-task Components: Performance Affects Versions: 2.0.0, 0.99.1 Reporter: stack Assignee: John Leach Fix For: 2.0.0, 1.1.0, 0.98.13 Attachments: 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, HBASE-12148.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png -- This message was sent by Atlassian JIRA (v6.3.4#6332)
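A non-blocking tracker built on an AtomicReference, as the attachment describes, typically keeps the [min, max] pair in an immutable object and widens it with a compare-and-set loop. A minimal sketch of that technique (the class and field names are illustrative, not the patch's):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of a lock-free time-range tracker: the current [min, max] lives in
// an immutable Range behind an AtomicReference; concurrent writers widen it
// with a CAS retry loop instead of synchronizing.
final class NonBlockingTimeRange {
    private static final class Range {
        final long min, max;
        Range(long min, long max) { this.min = min; this.max = max; }
    }

    private final AtomicReference<Range> range =
        new AtomicReference<>(new Range(Long.MAX_VALUE, Long.MIN_VALUE));

    void includeTimestamp(long ts) {
        for (;;) {
            Range cur = range.get();
            if (ts >= cur.min && ts <= cur.max) {
                return; // already covered: the common case costs one volatile read
            }
            Range next = new Range(Math.min(cur.min, ts), Math.max(cur.max, ts));
            if (range.compareAndSet(cur, next)) {
                return; // a losing racer just retries against the fresh value
            }
        }
    }

    long getMin() { return range.get().min; }
    long getMax() { return range.get().max; }

    public static void main(String[] args) {
        NonBlockingTimeRange t = new NonBlockingTimeRange();
        t.includeTimestamp(10);
        t.includeTimestamp(5);
        t.includeTimestamp(7); // inside [5, 10], no CAS issued
        System.out.println(t.getMin() + ".." + t.getMax()); // 5..10
    }
}
```

Note that, as the follow-up comment on this ticket points out, the reads (`getMin`/`getMax`) must also go through the atomic reference rather than a synchronized getter, or the contention simply moves.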
[jira] [Commented] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14391077#comment-14391077 ] John Leach commented on HBASE-12148: HBase-12148.txt added for your consideration. Remove TimeRangeTracker as point of contention when many threads writing a Store Key: HBASE-12148 URL: https://issues.apache.org/jira/browse/HBASE-12148 Project: HBase Issue Type: Sub-task Components: Performance Affects Versions: 2.0.0, 0.99.1 Reporter: stack Assignee: John Leach Fix For: 2.0.0, 1.1.0, 0.98.13 Attachments: 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, HBASE-12148.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work logged] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?focusedWorklogId=19563page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-19563 ] John Leach logged work on HBASE-13378: -- Author: John Leach Created on: 01/Apr/15 21:04 Start Date: 01/Apr/15 21:03 Worklog Time Spent: 2h Issue Time Tracking --- Worklog Id: (was: 19563) Time Spent: 2h Remaining Estimate: 0h (was: 2h) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.txt Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Status: Patch Available (was: Open) Patch Submitted HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, HBASE-13427_V3.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Attachment: HBASE-13427_V3.patch Attaching file with checkstyle error fixes. HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, HBASE-13427_V3.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work logged] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?focusedWorklogId=21104page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-21104 ] John Leach logged work on HBASE-13427: -- Author: John Leach Created on: 09/Apr/15 15:08 Start Date: 09/Apr/15 15:08 Worklog Time Spent: 2h Issue Time Tracking --- Worklog Id: (was: 21104) Time Spent: 2h Remaining Estimate: 0h (was: 2h) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, HBASE-13427_V3.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14486147#comment-14486147 ] John Leach commented on HBASE-12148: I can get this intermittently (1 out of 5) but I am still struggling a bit... It is the TestDistributedLogSplitting class that seems to show a race condition... Digging in some more... Remove TimeRangeTracker as point of contention when many threads writing a Store Key: HBASE-12148 URL: https://issues.apache.org/jira/browse/HBASE-12148 Project: HBase Issue Type: Sub-task Components: Performance Affects Versions: 2.0.0, 0.99.1 Reporter: stack Assignee: John Leach Fix For: 2.0.0, 1.1.0, 0.98.13 Attachments: 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, HBASE-12148.txt, HBASE-12148V2.txt, Screen Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-13427: --- Status: Open (was: Patch Available) Fixing checkstyle. HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, SetFromMap.add_Runnable.tiff Original Estimate: 2h Remaining Estimate: 2h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14577884#comment-14577884 ] John Leach commented on HBASE-13378: Can someone explain this one to me? One interesting bit is: Now we won't hold on versions of Cells read by an READ_UNCOMMITTED scanner (i.e. even while the scanner is active those cells can be removed by a flush or a compaction). The only bit I changed is whether you are in the scannerReadPts map and unless my IDE is acting goofy, I do not see usages outside of RegionScannerImpl. Lars? RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.patch, HBASE-13378.txt Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14574735#comment-14574735 ] John Leach commented on HBASE-13378: I will take a closer look this weekend and see if I can get it to not change the guarantee while removing the synchronization. RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.patch, HBASE-13378.txt Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14570992#comment-14570992 ] John Leach commented on HBASE-13378: Seems like a nit IMO when you are comparing it to synchronizing all gets/scans in HBase... RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.patch, HBASE-13378.txt Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13420) RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
[ https://issues.apache.org/jira/browse/HBASE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571253#comment-14571253 ] John Leach commented on HBASE-13420: Andrew, Sorry for the delay, I have been jumping around a bit. We just tested your patch during a data load of the LINE_ITEM table for the TPCC benchmark. Your change removed 140 seconds of blocked CPU for a 30M row load. Regards, John RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load --- Key: HBASE-13420 URL: https://issues.apache.org/jira/browse/HBASE-13420 Project: HBase Issue Type: Improvement Reporter: John Leach Assignee: Andrew Purtell Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1 Attachments: 1M-0.98.12.svg, 1M-0.98.13-SNAPSHOT.svg, HBASE-13420.patch, HBASE-13420.txt, hbase-13420.tar.gz, offerExecutionLatency.tiff Original Estimate: 3h Remaining Estimate: 3h The ArrayBlockingQueue blocks threads for 20s during a performance run focusing on creating numerous small scans. I see a buffer size of (100) private final BlockingQueue<Long> coprocessorTimeNanos = new ArrayBlockingQueue<Long>( LATENCY_BUFFER_SIZE); and then I see a drain coming from MetricsRegionWrapperImpl with a 45-second executor HRegionMetricsWrapperRunable RegionCoprocessorHost#getCoprocessorExecutionStatistics() RegionCoprocessorHost#getExecutionLatenciesNanos() Am I missing something? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14573477#comment-14573477 ] John Leach commented on HBASE-13378: Lars do you have a codeline where the decision on the flush and compaction is made? Maybe I can use a different mechanism there? RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.patch, HBASE-13378.txt Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13378) RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels
[ https://issues.apache.org/jira/browse/HBASE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582547#comment-14582547 ] John Leach commented on HBASE-13378: Can you point me to the line of code that uses the scannerReadPoints for determining whether to flush or compact the data? RegionScannerImpl synchronized for READ_UNCOMMITTED Isolation Levels Key: HBASE-13378 URL: https://issues.apache.org/jira/browse/HBASE-13378 Project: HBase Issue Type: New Feature Reporter: John Leach Assignee: John Leach Priority: Minor Attachments: HBASE-13378.patch, HBASE-13378.txt Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h This block of code below coupled with the close method could be changed so that READ_UNCOMMITTED does not synchronize. {CODE:JAVA} // synchronize on scannerReadPoints so that nobody calculates // getSmallestReadPoint, before scannerReadPoints is updated. IsolationLevel isolationLevel = scan.getIsolationLevel(); synchronized(scannerReadPoints) { this.readPt = getReadpoint(isolationLevel); scannerReadPoints.put(this, this.readPt); } {CODE} This hotspots for me under heavy get requests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-13427) HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner
[ https://issues.apache.org/jira/browse/HBASE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14571364#comment-14571364 ] John Leach commented on HBASE-13427: Sorry for the delay, getting back to this one. Here is a synopsis of the load profile. We are batching and writing around 100K puts per second on a regionserver. Each of these puts has to perform a simple get of the row it is attempting to write. Once we apply HBASE-13378, the addChangedReaderObserver immediately starts to hotspot. Regards, John Leach HStore#addChangedReaderObserver hotspots due to missing hashCode and equals on StoreScanner --- Key: HBASE-13427 URL: https://issues.apache.org/jira/browse/HBASE-13427 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Attachments: 13427-v4.txt, HBASE-13427.patch, HBASE-13427_CURLY_BRACES.patch, HBASE-13427_V3.patch, SetFromMap.add_Runnable.tiff, perftest.hstore.changedReadObserver.txt Original Estimate: 2h Time Spent: 2h Remaining Estimate: 0h Please see attached graph... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12912) StoreScanner calls Configuration for Boolean Check on each initialization
[ https://issues.apache.org/jira/browse/HBASE-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14630216#comment-14630216 ] John Leach commented on HBASE-12912: Let me take a shot at it. StoreScanner calls Configuration for Boolean Check on each initialization - Key: HBASE-12912 URL: https://issues.apache.org/jira/browse/HBASE-12912 Project: HBase Issue Type: Bug Reporter: John Leach Assignee: John Leach Fix For: 2.0.0, 0.98.14, 1.1.2, 1.3.0, 1.2.1, 1.0.3 Attachments: StoreScannerStall.tiff Original Estimate: 1h Remaining Estimate: 1h There is a clear CPU drain and iterator creation when creating store scanners under high load. Splice was running a TPCC test of our database and we saw object creation and CPU waste on the boolean check. Code Snippet... if (store != null && ((HStore)store).getHRegion() != null && store.getStorefilesCount() > 1) { RegionServerServices rsService = ((HStore)store).getHRegion().getRegionServerServices(); if (rsService == null || !rsService.getConfiguration().getBoolean( STORESCANNER_PARALLEL_SEEK_ENABLE, false)) return; isParallelSeekEnabled = true; executor = rsService.getExecutorService(); } Will attach profile... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
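The usual fix for this pattern is to parse the flag once and have scanner initialization read a final field. A minimal sketch under assumptions — a `Properties` stand-in for HBase's `Configuration`, and an illustrative holder class rather than the actual patch:

```java
import java.util.Properties;

// Sketch: resolve the parallel-seek flag once at construction time instead
// of calling getBoolean() on the Configuration for every scanner created.
// Class and key names are illustrative, not HBase's actual fields.
final class ParallelSeekFlag {
    static final String STORESCANNER_PARALLEL_SEEK_ENABLE =
        "hbase.storescanner.parallel.seek.enable";

    private final boolean parallelSeekEnabled;

    ParallelSeekFlag(Properties conf) {
        // One string parse here; scanner init just reads a final boolean.
        this.parallelSeekEnabled = Boolean.parseBoolean(
            conf.getProperty(STORESCANNER_PARALLEL_SEEK_ENABLE, "false"));
    }

    boolean isParallelSeekEnabled() {
        return parallelSeekEnabled;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(STORESCANNER_PARALLEL_SEEK_ENABLE, "true");
        System.out.println(new ParallelSeekFlag(conf).isParallelSeekEnabled());        // true
        System.out.println(new ParallelSeekFlag(new Properties()).isParallelSeekEnabled()); // false
    }
}
```

The cost this avoids is not the lookup alone: each `Configuration.getBoolean` call walks the resource list and allocates iterators, which is what the attached profile shows under load.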
[jira] [Commented] (HBASE-14540) Write Ahead Log Batching Optimization
[ https://issues.apache.org/jira/browse/HBASE-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14983350#comment-14983350 ] John Leach commented on HBASE-14540: I did it on Cloudera 5.4.1 for my test... > Write Ahead Log Batching Optimization > - > > Key: HBASE-14540 > URL: https://issues.apache.org/jira/browse/HBASE-14540 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Assignee: John Leach > Attachments: 14540.txt, HBaseWALBlockingWaitStrategy.java, writes.png > > > The new write ahead log mechanism seems to batch too few mutations when > running inside the disruptor. As we scaled our load up (many threads with > small writes), we saw the number of hdfs sync operations grow in concert with > the number of writes. Generally, one would expect the size of the batches to > grow but the number of actual sync operations to settle. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-14540) Write Ahead Log Batching Optimization
John Leach created HBASE-14540: -- Summary: Write Ahead Log Batching Optimization Key: HBASE-14540 URL: https://issues.apache.org/jira/browse/HBASE-14540 Project: HBase Issue Type: Improvement Reporter: John Leach The new write ahead log mechanism seems to batch too few mutations when running inside the disruptor. As we scaled our load up (many threads with small writes), we saw the number of hdfs sync operations grow in concert with the number of writes. Generally, one would expect the size of the batches to grow but the number of actual sync operations to settle. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14540) Write Ahead Log Batching Optimization
[ https://issues.apache.org/jira/browse/HBASE-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-14540: --- Attachment: HBaseWALBlockingWaitStrategy.java Here is a modified Wait Strategy to apply to the disruptor. > Write Ahead Log Batching Optimization > - > > Key: HBASE-14540 > URL: https://issues.apache.org/jira/browse/HBASE-14540 > Project: HBase > Issue Type: Improvement >Reporter: John Leach > Attachments: HBaseWALBlockingWaitStrategy.java > > > The new write ahead log mechanism seems to batch too few mutations when > running inside the disruptor. As we scaled our load up (many threads with > small writes), we saw the number of hdfs sync operations grow in concert with > the number of writes. Generally, one would expect the size of the batches to > grow but the number of actual sync operations to settle. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14540) Write Ahead Log Batching Optimization
[ https://issues.apache.org/jira/browse/HBASE-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14941155#comment-14941155 ] John Leach commented on HBASE-14540: I did not run this on HBase based benchmarks but I did run this while we (SpliceMachine) were running TPCC benchmarks and it showed a significant improvement (2x). Also we were able to get rid of these types of error messages. {noformat}wal.FSHLog: Slow sync cost{noformat} > Write Ahead Log Batching Optimization > - > > Key: HBASE-14540 > URL: https://issues.apache.org/jira/browse/HBASE-14540 > Project: HBase > Issue Type: Improvement >Reporter: John Leach > Attachments: HBaseWALBlockingWaitStrategy.java > > > The new write ahead log mechanism seems to batch too few mutations when > running inside the disruptor. As we scaled our load up (many threads with > small writes), we saw the number of hdfs sync operations grow in concert with > the number of writes. Generally, one would expect the size of the batches to > grow but the number of actual sync operations to settle. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14540) Write Ahead Log Batching Optimization
[ https://issues.apache.org/jira/browse/HBASE-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14941327#comment-14941327 ] John Leach commented on HBASE-14540: Good point... Probably not a good idea then. > Write Ahead Log Batching Optimization > - > > Key: HBASE-14540 > URL: https://issues.apache.org/jira/browse/HBASE-14540 > Project: HBase > Issue Type: Improvement >Reporter: John Leach > Attachments: HBaseWALBlockingWaitStrategy.java > > > The new write ahead log mechanism seems to batch too few mutations when > running inside the disruptor. As we scaled our load up (many threads with > small writes), we saw the number of hdfs sync operations grow in concert with > the number of writes. Generally, one would expect the size of the batches to > grow but the number of actual sync operations to settle. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14540) Write Ahead Log Batching Optimization
[ https://issues.apache.org/jira/browse/HBASE-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14941346#comment-14941346 ] John Leach commented on HBASE-14540: Clearly, I think we should make it configurable. The problem with the "smart batching" we have is that it is designed for in-memory processing vs. a distributed WAL. I appreciate you thinking on this... > Write Ahead Log Batching Optimization > - > > Key: HBASE-14540 > URL: https://issues.apache.org/jira/browse/HBASE-14540 > Project: HBase > Issue Type: Improvement >Reporter: John Leach > Attachments: HBaseWALBlockingWaitStrategy.java > > > The new write ahead log mechanism seems to batch too few mutations when > running inside the disruptor. As we scaled our load up (many threads with > small writes), we saw the number of hdfs sync operations grow in concert with > the number of writes. Generally, one would expect the size of the batches to > grow but the number of actual sync operations to settle. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14540) Write Ahead Log Batching Optimization
[ https://issues.apache.org/jira/browse/HBASE-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14941309#comment-14941309 ] John Leach commented on HBASE-14540: Elliott, that is what I intuitively thought as well for a long time. A few implementations have changed my mind on this... FYI, Here is a nice article on smart batching and why it is important even in low latency systems. http://mechanical-sympathy.blogspot.com/2011/10/smart-batching.html Stack, let me know if I can help on the testing front. I know you put a ton of work in on the disruptor piece. > Write Ahead Log Batching Optimization > - > > Key: HBASE-14540 > URL: https://issues.apache.org/jira/browse/HBASE-14540 > Project: HBase > Issue Type: Improvement >Reporter: John Leach > Attachments: HBaseWALBlockingWaitStrategy.java > > > The new write ahead log mechanism seems to batch too few mutations when > running inside the disruptor. As we scaled our load up (many threads with > small writes), we saw the number of hdfs sync operations grow in concert with > the number of writes. Generally, one would expect the size of the batches to > grow but the number of actual sync operations to settle. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
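The smart-batching idea from the linked article can be shown with a toy, deterministic simulation (my sketch, not the disruptor or the attached wait strategy): writes arrive one per tick, a sync call costs several ticks, and the syncer either syncs one edit at a time or drains everything pending per sync. With batching, the sync count tracks elapsed sync time rather than the write count, which is the settling behavior the issue description expects.

```java
public class SmartBatchingSim {
    /**
     * Simulates a WAL syncer. Writes arrive one per tick; each sync call costs
     * syncCostTicks. If batch is true, every sync covers all pending edits
     * (smart batching); otherwise each sync covers exactly one edit.
     * Returns the number of sync calls needed to durably write totalWrites edits.
     */
    public static int syncCalls(int totalWrites, int syncCostTicks, boolean batch) {
        int tick = 0, synced = 0, syncs = 0;
        while (synced < totalWrites) {
            int arrived = Math.min(totalWrites, tick);   // one write arrives per tick
            int pending = arrived - synced;
            if (pending == 0) { tick++; continue; }      // idle: wait for work
            int covered = batch ? pending : 1;           // smart batching drains the queue
            syncs++;
            synced += covered;
            tick += syncCostTicks;                       // the sync itself takes time
        }
        return syncs;
    }

    public static void main(String[] args) {
        System.out.println("one-at-a-time: " + syncCalls(100, 10, false) + " syncs");
        System.out.println("smart batching: " + syncCalls(100, 10, true) + " syncs");
    }
}
```

With a sync costing ten ticks, 100 writes take 100 syncs one at a time but only 11 when each sync drains the backlog that accumulated during the previous sync.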
[jira] [Commented] (HBASE-14509) Configurable sparse indexes?
[ https://issues.apache.org/jira/browse/HBASE-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944433#comment-14944433 ] John Leach commented on HBASE-14509: buddy index? I must have been sick that day in school. Hmm, how about Histograms, Frequent Items, and cardinality? They sure help an optimizer know which end is up. > Configurable sparse indexes? > > > Key: HBASE-14509 > URL: https://issues.apache.org/jira/browse/HBASE-14509 > Project: HBase > Issue Type: Brainstorming >Reporter: Lars Hofhansl > > This idea just popped up today and I wanted to record it for discussion: > What if we kept sparse column indexes per region or HFile or per configurable > range? > I.e. For any given CQ we record the lowest and highest value for a particular > range (HFile, Region, or a custom range like the Phoenix guide post). > By tweaking the size of these ranges we can control the size of the index, vs > its selectivity. > For example if we kept it by HFile we can almost instantly decide whether we > need scan a particular HFile at all to find a particular value in a Cell. > We can also collect min/max values for each n MB of data, for example when we > scan the region the first time. Assuming ranges are large enough we can always > keep the index in memory together with the region. > Kind of a sparse local index. Might be much easier than the buddy region stuff > we've been discussing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
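The min/max-per-range idea quoted above is easy to prototype. A hedged toy version (illustrative names; real guideposts would live beside HFile metadata): record the minimum and maximum value seen per fixed-size chunk of cells, then consult the index to decide whether a chunk can possibly contain a value.

```java
import java.util.ArrayList;
import java.util.List;

public class SparseMinMaxIndex {
    private final int chunkSize;
    private final List<long[]> ranges = new ArrayList<>(); // {min, max} per chunk
    private long curMin = Long.MAX_VALUE, curMax = Long.MIN_VALUE;
    private int curCount = 0;

    public SparseMinMaxIndex(int chunkSize) { this.chunkSize = chunkSize; }

    /** Feed values in write order; every chunkSize values closes out one range. */
    public void add(long value) {
        curMin = Math.min(curMin, value);
        curMax = Math.max(curMax, value);
        if (++curCount == chunkSize) flush();
    }

    /** Close out a partially filled chunk (call once after the last add). */
    public void flush() {
        if (curCount > 0) {
            ranges.add(new long[] { curMin, curMax });
            curMin = Long.MAX_VALUE; curMax = Long.MIN_VALUE; curCount = 0;
        }
    }

    /** False means definitely absent; true means "must scan the chunk". */
    public boolean mightContain(long value) {
        for (long[] r : ranges)
            if (value >= r[0] && value <= r[1]) return true;
        return false;
    }

    public static void main(String[] args) {
        SparseMinMaxIndex idx = new SparseMinMaxIndex(3);
        for (long v : new long[] { 1, 5, 9, 100, 104, 108 }) idx.add(v);
        idx.flush();
        System.out.println(idx.mightContain(50));   // false: the scan can be skipped
        System.out.println(idx.mightContain(104));  // true: candidate, scan chunk 2
    }
}
```

As the quoted description notes, chunk size trades index size against selectivity: narrower ranges reject more lookups but cost more memory.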
[jira] [Commented] (HBASE-14266) RegionServers have a lock contention of Configuration.getProps
[ https://issues.apache.org/jira/browse/HBASE-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906422#comment-14906422 ] John Leach commented on HBASE-14266: HBASE-12912 is a similar issue. I am running tests right now for a fix, if clean will push patch. > RegionServers have a lock contention of Configuration.getProps > -- > > Key: HBASE-14266 > URL: https://issues.apache.org/jira/browse/HBASE-14266 > Project: HBase > Issue Type: Improvement > Components: regionserver > Environment: hbase-0.98.6-cdh5.3.1 >Reporter: Toshihiro Suzuki > Attachments: thread_dump.txt > > > Here's an extract from thread dump of the RegionServer of my cluster: > {code} > ... > Thread 267 (RW.default.readRpcServer.handler=184,queue=15,port=60020): > State: BLOCKED > Blocked count: 204028 > Waited count: 9702639 > Blocked on org.apache.hadoop.conf.Configuration@5a5e3da > Blocked by 250 (RW.default.readRpcServer.handler=167,queue=18,port=60020) > Stack: > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2250) > org.apache.hadoop.conf.Configuration.get(Configuration.java:861) > org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:880) > org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1281) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:138) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:157) > > org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1804) > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1794) > > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3852) > > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1952) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1938) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1915) > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4872) > 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4847) > > org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2918) > > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29921) > org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031) > org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116) > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96) > ... > {code} > There are such many threads in the thread dump. > I think that RegionServers have a lock contention which causes performance > issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14266) RegionServers have a lock contention of Configuration.getProps
[ https://issues.apache.org/jira/browse/HBASE-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-14266: --- Attachment: HBASE-14266.patch Adding Quick Patch, more configuration props should be added in over time. This is specifically for the synchronization issue. > RegionServers have a lock contention of Configuration.getProps > -- > > Key: HBASE-14266 > URL: https://issues.apache.org/jira/browse/HBASE-14266 > Project: HBase > Issue Type: Improvement > Components: regionserver > Environment: hbase-0.98.6-cdh5.3.1 >Reporter: Toshihiro Suzuki >Assignee: John Leach > Attachments: HBASE-14266.patch, thread_dump.txt > > > Here's an extract from thread dump of the RegionServer of my cluster: > {code} > ... > Thread 267 (RW.default.readRpcServer.handler=184,queue=15,port=60020): > State: BLOCKED > Blocked count: 204028 > Waited count: 9702639 > Blocked on org.apache.hadoop.conf.Configuration@5a5e3da > Blocked by 250 (RW.default.readRpcServer.handler=167,queue=18,port=60020) > Stack: > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2250) > org.apache.hadoop.conf.Configuration.get(Configuration.java:861) > org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:880) > org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1281) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:138) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:157) > > org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1804) > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1794) > > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3852) > > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1952) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1938) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1915) > 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4872) > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4847) > > org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2918) > > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29921) > org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031) > org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116) > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96) > ... > {code} > There are such many threads in the thread dump. > I think that RegionServers have a lock contention which causes performance > issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-14266) RegionServers have a lock contention of Configuration.getProps
[ https://issues.apache.org/jira/browse/HBASE-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-14266: -- Assignee: John Leach > RegionServers have a lock contention of Configuration.getProps > -- > > Key: HBASE-14266 > URL: https://issues.apache.org/jira/browse/HBASE-14266 > Project: HBase > Issue Type: Improvement > Components: regionserver > Environment: hbase-0.98.6-cdh5.3.1 >Reporter: Toshihiro Suzuki >Assignee: John Leach > Attachments: thread_dump.txt > > > Here's an extract from thread dump of the RegionServer of my cluster: > {code} > ... > Thread 267 (RW.default.readRpcServer.handler=184,queue=15,port=60020): > State: BLOCKED > Blocked count: 204028 > Waited count: 9702639 > Blocked on org.apache.hadoop.conf.Configuration@5a5e3da > Blocked by 250 (RW.default.readRpcServer.handler=167,queue=18,port=60020) > Stack: > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2250) > org.apache.hadoop.conf.Configuration.get(Configuration.java:861) > org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:880) > org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1281) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:138) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:157) > > org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1804) > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1794) > > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3852) > > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1952) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1938) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1915) > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4872) > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4847) > > 
org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2918) > > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29921) > org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031) > org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116) > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96) > ... > {code} > There are such many threads in the thread dump. > I think that RegionServers have a lock contention which causes performance > issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14266) RegionServers have a lock contention of Configuration.getProps
[ https://issues.apache.org/jira/browse/HBASE-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906487#comment-14906487 ] John Leach commented on HBASE-14266: Assigned to Jeff, he will make changes based on feedback. > RegionServers have a lock contention of Configuration.getProps > -- > > Key: HBASE-14266 > URL: https://issues.apache.org/jira/browse/HBASE-14266 > Project: HBase > Issue Type: Improvement > Components: regionserver > Environment: hbase-0.98.6-cdh5.3.1 >Reporter: Toshihiro Suzuki >Assignee: Jeff Cunningham > Attachments: HBASE-14266.patch, thread_dump.txt > > > Here's an extract from thread dump of the RegionServer of my cluster: > {code} > ... > Thread 267 (RW.default.readRpcServer.handler=184,queue=15,port=60020): > State: BLOCKED > Blocked count: 204028 > Waited count: 9702639 > Blocked on org.apache.hadoop.conf.Configuration@5a5e3da > Blocked by 250 (RW.default.readRpcServer.handler=167,queue=18,port=60020) > Stack: > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2250) > org.apache.hadoop.conf.Configuration.get(Configuration.java:861) > org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:880) > org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1281) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:138) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:157) > > org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1804) > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1794) > > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3852) > > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1952) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1938) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1915) > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4872) > 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4847) > > org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2918) > > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29921) > org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031) > org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116) > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96) > ... > {code} > There are such many threads in the thread dump. > I think that RegionServers have a lock contention which causes performance > issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14266) RegionServers have a lock contention of Configuration.getProps
[ https://issues.apache.org/jira/browse/HBASE-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-14266: --- Assignee: Jeff Cunningham (was: John Leach) > RegionServers have a lock contention of Configuration.getProps > -- > > Key: HBASE-14266 > URL: https://issues.apache.org/jira/browse/HBASE-14266 > Project: HBase > Issue Type: Improvement > Components: regionserver > Environment: hbase-0.98.6-cdh5.3.1 >Reporter: Toshihiro Suzuki >Assignee: Jeff Cunningham > Attachments: HBASE-14266.patch, thread_dump.txt > > > Here's an extract from thread dump of the RegionServer of my cluster: > {code} > ... > Thread 267 (RW.default.readRpcServer.handler=184,queue=15,port=60020): > State: BLOCKED > Blocked count: 204028 > Waited count: 9702639 > Blocked on org.apache.hadoop.conf.Configuration@5a5e3da > Blocked by 250 (RW.default.readRpcServer.handler=167,queue=18,port=60020) > Stack: > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2250) > org.apache.hadoop.conf.Configuration.get(Configuration.java:861) > org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:880) > org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1281) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:138) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:157) > > org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1804) > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1794) > > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3852) > > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1952) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1938) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1915) > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4872) > 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4847) > > org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2918) > > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29921) > org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031) > org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116) > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96) > ... > {code} > There are such many threads in the thread dump. > I think that RegionServers have a lock contention which causes performance > issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14266) RegionServers have a lock contention of Configuration.getProps
[ https://issues.apache.org/jira/browse/HBASE-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906497#comment-14906497 ] John Leach commented on HBASE-14266: Jeff, I was slamming this in too fast. There are logical problems with the executorService; can you fix them? > RegionServers have a lock contention of Configuration.getProps > -- > > Key: HBASE-14266 > URL: https://issues.apache.org/jira/browse/HBASE-14266 > Project: HBase > Issue Type: Improvement > Components: regionserver > Environment: hbase-0.98.6-cdh5.3.1 >Reporter: Toshihiro Suzuki >Assignee: Jeff Cunningham > Attachments: HBASE-14266.patch, thread_dump.txt > > > Here's an extract from thread dump of the RegionServer of my cluster: > {code} > ... > Thread 267 (RW.default.readRpcServer.handler=184,queue=15,port=60020): > State: BLOCKED > Blocked count: 204028 > Waited count: 9702639 > Blocked on org.apache.hadoop.conf.Configuration@5a5e3da > Blocked by 250 (RW.default.readRpcServer.handler=167,queue=18,port=60020) > Stack: > org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2250) > org.apache.hadoop.conf.Configuration.get(Configuration.java:861) > org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:880) > org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1281) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:138) > > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:157) > > org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1804) > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1794) > > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:3852) > > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1952) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1938) > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1915) > 
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4872) > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4847) > > org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2918) > > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29921) > org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031) > org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108) > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116) > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96) > ... > {code} > There are such many threads in the thread dump. > I think that RegionServers have a lock contention which causes performance > issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-12148: --- Attachment: TimeRangeTracker.tiff Here are the blocked CPU cycles via JProfiler... > Remove TimeRangeTracker as point of contention when many threads writing a > Store > > > Key: HBASE-12148 > URL: https://issues.apache.org/jira/browse/HBASE-12148 > Project: HBase > Issue Type: Sub-task > Components: Performance >Affects Versions: 2.0.0, 0.99.1 >Reporter: stack >Assignee: Walter Koetke > Fix For: 2.0.0, 1.3.0, 0.98.19 > > Attachments: > 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, > 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, > HBASE-12148-V3.patch, HBASE-12148.txt, HBASE-12148V2.txt, Screen Shot > 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png, > TimeRangeTracker.tiff > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store
[ https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15204442#comment-15204442 ] John Leach commented on HBASE-12148: Yes, readers lock each other out when significant load is applied. > Remove TimeRangeTracker as point of contention when many threads writing a > Store > > > Key: HBASE-12148 > URL: https://issues.apache.org/jira/browse/HBASE-12148 > Project: HBase > Issue Type: Sub-task > Components: Performance >Affects Versions: 2.0.0, 0.99.1 >Reporter: stack >Assignee: Walter Koetke > Fix For: 2.0.0, 1.3.0, 0.98.19 > > Attachments: > 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, > 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, > HBASE-12148-V3.patch, HBASE-12148.txt, HBASE-12148V2.txt, Screen Shot > 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png, > TimeRangeTracker.tiff > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
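One of the attachments on this issue changes AtomicUtils.updateMin and updateMax; the lock-free shape of that idea is a compare-and-set retry loop, sketched here from first principles rather than taken from the HBase patch. Readers never block writers, and writers only retry when they lose a race they still need to win.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasTimeRangeTracker {
    private final AtomicLong min = new AtomicLong(Long.MAX_VALUE);
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    /** Lower the tracked minimum without taking a lock. */
    static void updateMin(AtomicLong slot, long value) {
        long cur;
        while (value < (cur = slot.get())) {
            if (slot.compareAndSet(cur, value)) return; // lost races retry via the loop
        }
    }

    /** Raise the tracked maximum without taking a lock. */
    static void updateMax(AtomicLong slot, long value) {
        long cur;
        while (value > (cur = slot.get())) {
            if (slot.compareAndSet(cur, value)) return;
        }
    }

    public void includeTimestamp(long ts) {
        updateMin(min, ts);
        updateMax(max, ts);
    }

    public long getMin() { return min.get(); }
    public long getMax() { return max.get(); }

    public static void main(String[] args) {
        CasTimeRangeTracker tracker = new CasTimeRangeTracker();
        for (long ts : new long[] { 42, 7, 99 }) tracker.includeTimestamp(ts);
        System.out.println(tracker.getMin() + ".." + tracker.getMax()); // 7..99
    }
}
```

The CAS loop exits as soon as the slot already holds a value at least as tight as ours, so under contention most calls do a single volatile read and no write at all.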
[jira] [Commented] (HBASE-15480) Bloom Filter check needs to be more efficient for array
[ https://issues.apache.org/jira/browse/HBASE-15480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222168#comment-15222168 ] John Leach commented on HBASE-15480: Yes. Two initial use cases would be the following... Snapshot Isolation: Use the bloom filter to check a batch of keys for an existing record (conflict detection). Bloom Join: Apply the bloom filters to restrict elements from the shuffle. > Bloom Filter check needs to be more efficient for array > --- > > Key: HBASE-15480 > URL: https://issues.apache.org/jira/browse/HBASE-15480 > Project: HBase > Issue Type: Improvement > Components: Performance >Affects Versions: 1.0.3 >Reporter: Walter Koetke >Assignee: Walter Koetke > Fix For: 1.0.4 > > Attachments: BloomFilterCheckOneByOne.tiff, > HBASE-15480-branch-1.0.patch > > > It is currently inefficient to do lots of bloom filter checks. Each check has > overhead like going to the block cache to retrieve the block and recording > metrics. It would be good to have one bloom filter check api that does a > bunch of checks without so much block retrieval and metrics updates. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
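The batch API the issue asks for amortizes per-call overhead (block retrieval, metrics) across many keys. A toy sketch: a trivial in-memory bloom filter with a counter standing in for the block-cache trip; none of these names are the actual HBase BloomFilter API.

```java
import java.util.BitSet;

public class BatchBloomSketch {
    static int blockFetches = 0;            // stand-in for per-call block cache trips
    private final BitSet bits = new BitSet(1 << 16);

    private static int hash(String key, int seed) {
        int h = key.hashCode() * 0x9E3779B1 + seed;
        return (h ^ (h >>> 16)) & 0xFFFF;   // mix, then clamp to the bit array
    }

    public void add(String key) {
        for (int seed = 0; seed < 3; seed++) bits.set(hash(key, seed));
    }

    private boolean test(String key) {
        for (int seed = 0; seed < 3; seed++)
            if (!bits.get(hash(key, seed))) return false;
        return true;                         // "maybe present" (false positives possible)
    }

    /** One-at-a-time check: pays the per-call overhead for every key. */
    public boolean mightContain(String key) {
        blockFetches++;
        return test(key);
    }

    /** Batched check: one overhead charge covers the whole key batch. */
    public boolean[] mightContainBatch(String[] keys) {
        blockFetches++;
        boolean[] out = new boolean[keys.length];
        for (int i = 0; i < keys.length; i++) out[i] = test(keys[i]);
        return out;
    }

    public static void main(String[] args) {
        BatchBloomSketch bloom = new BatchBloomSketch();
        bloom.add("row-1");
        bloom.add("row-2");
        boolean[] hits = bloom.mightContainBatch(new String[] { "row-1", "row-2", "row-3" });
        System.out.println(hits[0] + " " + hits[1]);   // true true (no false negatives)
        System.out.println("fetch charges: " + blockFetches);
    }
}
```

For the conflict-detection use case above, a transaction would hand its whole write set to mightContainBatch and only do real reads for the keys that come back true.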
[jira] [Commented] (HBASE-15556) need extensible ConsistencyControl interface
[ https://issues.apache.org/jira/browse/HBASE-15556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222883#comment-15222883 ] John Leach commented on HBASE-15556: Focus of the patch: 1. Isolate the consistency model of HBase and define it via an interface (clarity). 2. Define HBase’s current consistency model in a concrete class (synchronized ordering of writing threads via single readPoint with all scans synchronized). 3. Make this interface extensible. We can then test different approaches and the effect on performance/correctness. At Splice Machine, we have written an implementation for Snapshot Isolation. We did this because both synchronizing of scans and ordering of writing threads become a bottleneck on our TPCC runs. If you would like to provide feedback on our model, we can post it. I had asked Walt not to post because it would muddy the water with regards to this patch and its value to the community. I was hoping this would enable someone to write a non-blocking implementation for HBase's more restrictive model. In general, I am a huge fan of making things into extensible components and I love the work in making stores more extensible (Striped, etc.). > need extensible ConsistencyControl interface > > > Key: HBASE-15556 > URL: https://issues.apache.org/jira/browse/HBASE-15556 > Project: HBase > Issue Type: Improvement >Affects Versions: 1.0.3 >Reporter: Walter Koetke >Assignee: Walter Koetke > Fix For: 1.0.4 > > Attachments: HBASE-15556-branch-1.0-02.patch, > HBASE-15556-branch-1.0.patch > > > The class MultiVersionConsistencyControl should be abstracted into an > interface ConsistencyControl so it can be extended by a configured custom > implementation class, with MultiVersionConsistencyControl as the default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
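The seam described in points 1 through 3 would look roughly like this: a hypothetical interface shape plus a toy single-read-point implementation that mimics the ordering semantics described (a write number becomes visible only once all earlier writes complete). The real MultiVersionConsistencyControl is considerably more involved; this only illustrates the extension point being proposed.

```java
import java.util.TreeSet;

public class ConsistencyControlSketch {
    /** The proposed seam: pluggable consistency control. */
    interface ConsistencyControl {
        long beginMemstoreInsert();             // allocate a write number
        void completeMemstoreInsert(long w);    // mark that write durable/visible
        long readPoint();                       // newest point readers may observe
    }

    /** Toy single-read-point MVCC: the read point trails the oldest incomplete write. */
    static final class SingleReadPointMvcc implements ConsistencyControl {
        private long nextWrite = 0;
        private final TreeSet<Long> pending = new TreeSet<>();

        @Override public synchronized long beginMemstoreInsert() {
            long w = ++nextWrite;
            pending.add(w);
            return w;
        }

        @Override public synchronized void completeMemstoreInsert(long w) {
            pending.remove(w);
        }

        @Override public synchronized long readPoint() {
            return pending.isEmpty() ? nextWrite : pending.first() - 1;
        }
    }

    public static void main(String[] args) {
        ConsistencyControl mvcc = new SingleReadPointMvcc();
        long w1 = mvcc.beginMemstoreInsert();
        long w2 = mvcc.beginMemstoreInsert();
        mvcc.completeMemstoreInsert(w2);        // w2 done, but w1 still in flight...
        System.out.println(mvcc.readPoint());   // 0: readers cannot see past w1 yet
        mvcc.completeMemstoreInsert(w1);
        System.out.println(mvcc.readPoint());   // 2: both writes visible
    }
}
```

The stall shown in main (a completed later write held invisible behind an earlier one) is exactly the ordering cost the comment says became a bottleneck under TPCC, and the interface is where an alternative model such as snapshot isolation would plug in.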
[jira] [Commented] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.
[ https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15382365#comment-15382365 ] John Leach commented on HBASE-16210: I do not think the Splice Community (We open sourced today!) would use that approach. Do you have a design for what your transactional system would look like, Enis or Sai? I would gladly comment on it. > Add Timestamp class to the hbase-common and Timestamp type to HTable. > - > > Key: HBASE-16210 > URL: https://issues.apache.org/jira/browse/HBASE-16210 > Project: HBase > Issue Type: Sub-task >Reporter: Sai Teja Ranuva >Assignee: Sai Teja Ranuva >Priority: Minor > Labels: patch, testing > Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, > HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, > HBASE-16210.master.5.patch, HBASE-16210.master.6.patch, > HBASE-16210.master.7.patch, HBASE-16210.master.8.1.patch, > HBASE-16210.master.8.patch > > > This is a sub-issue of > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is > a small step towards completely adding Hybrid Logical Clocks (HLC) to HBase. > The main idea of HLC is described in > [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with > the motivation of adding it to HBase. > This patch in this issue takes the code from the patch in the parent. > The parent patch is pretty big to review at once. So, the plan is to get code > reviewed in smaller patches and > in the process take suggestions and change things if necessary. > What is this patch/issue about ? > This issue attempts to add a timestamp class to hbase-common and timestamp > type to HTable. > This is a part of the attempt to get HLC into HBase. This patch does not > interfere with the current working of HBase. > Why Timestamp Class ? > The Timestamp class can serve as an abstraction to represent time in HBase in 64 bits. 
> It is just used for manipulating the 64 bits of the timestamp and is not > concerned with the actual time. > There are three types of timestamps: System time, Custom, and HLC. Each of > them has methods to manipulate the 64 bits of the timestamp. > HTable changes: Added a timestamp type property to HTable. This will help > HBase exist in conjunction with the old type of timestamp as well as the HLC which > will be introduced. The default is set to the custom timestamp (the current way > timestamps are used); an unset timestamp also defaults to custom, as it > should. The default timestamp will be changed to HLC when the HLC feature > is completely introduced in HBase. > Check HBASE-16210.master.6.patch. > Suggestions are welcome. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
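The "manipulating the 64 bits" idea above can be pictured with a small sketch. This is purely illustrative: the class name, accessor names, and the 44/20 bit split are assumptions for the sake of the example, not the API of the attached patches.

```java
// Purely illustrative sketch of a 64-bit HLC-style timestamp wrapper in the
// spirit of the description above; the bit split and all names are assumptions.
final class HlcTimestampSketch {
    // Hypothetical split: high 44 bits physical time, low 20 bits logical counter.
    private static final int LOGICAL_BITS = 20;
    private static final long LOGICAL_MASK = (1L << LOGICAL_BITS) - 1;

    private final long bits; // the raw 64-bit timestamp value

    HlcTimestampSketch(long physicalMillis, long logical) {
        this.bits = (physicalMillis << LOGICAL_BITS) | (logical & LOGICAL_MASK);
    }

    long toLong()       { return bits; }                 // the value HBase would store
    long physicalTime() { return bits >>> LOGICAL_BITS; }
    long logicalTime()  { return bits & LOGICAL_MASK; }
}
```

A System-time or Custom timestamp would occupy the same 64 bits but interpret them differently, which is why a timestamp-type property on HTable is needed for the types to coexist.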
[jira] [Created] (HBASE-16468) RowLocks use a list implementation instead of an array implementation
John Leach created HBASE-16468: -- Summary: RowLocks use a list implementation instead of an array implementation Key: HBASE-16468 URL: https://issues.apache.org/jira/browse/HBASE-16468 Project: HBase Issue Type: Bug Reporter: John Leach Little nit, but no reason to create the extra objects. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
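The nit is that the mini-batch path acquires one lock per row of a batch whose size is known up front, so a plain array can hold the locks without the extra list object and its resizing. A hedged before/after sketch follows; `RowLockSketch` is a stand-in for HBase's row lock type, not the real class.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for HBase's row lock; illustrative only.
class RowLockSketch {
    final int row;
    RowLockSketch(int row) { this.row = row; }
}

class MiniBatchLocks {
    // Before: a growable list, paying for the wrapper object and resizes.
    static List<RowLockSketch> acquireWithList(int batchSize) {
        List<RowLockSketch> locks = new ArrayList<>();
        for (int i = 0; i < batchSize; i++) locks.add(new RowLockSketch(i));
        return locks;
    }

    // After: the batch size is known up front, so a plain array suffices.
    static RowLockSketch[] acquireWithArray(int batchSize) {
        RowLockSketch[] locks = new RowLockSketch[batchSize];
        for (int i = 0; i < batchSize; i++) locks[i] = new RowLockSketch(i);
        return locks;
    }
}
```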
[jira] [Updated] (HBASE-16468) RowLocks use a list implementation instead of an array implementation for doMiniBatchMutate
[ https://issues.apache.org/jira/browse/HBASE-16468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-16468: --- Summary: RowLocks use a list implementation instead of an array implementation for doMiniBatchMutate (was: RowLocks use a list implementation instead of an array implementation) > RowLocks use a list implementation instead of an array implementation for > doMiniBatchMutate > --- > > Key: HBASE-16468 > URL: https://issues.apache.org/jira/browse/HBASE-16468 > Project: HBase > Issue Type: Bug >Reporter: John Leach > > Little nit, but no reason to create the extra objects. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-16468) RowLocks use a list implementation instead of an array implementation for doMiniBatchMutate
[ https://issues.apache.org/jira/browse/HBASE-16468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-16468: -- Assignee: John Leach > RowLocks use a list implementation instead of an array implementation for > doMiniBatchMutate > --- > > Key: HBASE-16468 > URL: https://issues.apache.org/jira/browse/HBASE-16468 > Project: HBase > Issue Type: Bug >Reporter: John Leach >Assignee: John Leach > > Little nit, but no reason to create the extra objects. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-16484) Create an Interface defining MultiVersionConsistencyControl
[ https://issues.apache.org/jira/browse/HBASE-16484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-16484: -- Assignee: John Leach > Create an Interface defining MultiVersionConsistencyControl > > > Key: HBASE-16484 > URL: https://issues.apache.org/jira/browse/HBASE-16484 > Project: HBase > Issue Type: Bug >Reporter: John Leach >Assignee: John Leach >Priority: Minor > > Hopefully this will help clarify this critical component. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16484) Create an Interface defining MultiVersionConsistencyControl
John Leach created HBASE-16484: -- Summary: Create an Interface defining MultiVersionConsistencyControl Key: HBASE-16484 URL: https://issues.apache.org/jira/browse/HBASE-16484 Project: HBase Issue Type: Bug Reporter: John Leach Priority: Minor Hopefully this will help clarify this critical component. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
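For readers unfamiliar with the component, the shape such an interface might take is sketched below. The method names are assumptions loosely modeled on HBase's MultiVersionConcurrencyControl, not the eventual patch, and the in-memory implementation is deliberately simplified (it assumes writes complete in order).

```java
// Illustrative sketch of an MVCC interface; names are assumptions, not the patch.
interface MvccSketch {
    long begin();                    // start a write, returning its write number
    void complete(long writeNumber); // mark the write visible to readers
    long getReadPoint();             // newest write number visible to all readers
}

// Trivial in-memory implementation for illustration only.
class SimpleMvcc implements MvccSketch {
    private long nextWrite = 1;
    private long readPoint = 0;

    public synchronized long begin() { return nextWrite++; }

    public synchronized void complete(long writeNumber) {
        // Simplified: real MVCC must not advance past incomplete earlier writes.
        readPoint = Math.max(readPoint, writeNumber);
    }

    public synchronized long getReadPoint() { return readPoint; }
}
```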
[jira] [Updated] (HBASE-16468) RowLocks use a list implementation instead of an array implementation for doMiniBatchMutate
[ https://issues.apache.org/jira/browse/HBASE-16468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-16468: --- Attachment: HBASE-16468.txt Patch > RowLocks use a list implementation instead of an array implementation for > doMiniBatchMutate > --- > > Key: HBASE-16468 > URL: https://issues.apache.org/jira/browse/HBASE-16468 > Project: HBase > Issue Type: Bug >Reporter: John Leach >Assignee: John Leach > Attachments: HBASE-16468.txt > > > Little nit, but no reason to create the extra objects. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-16496) Hotspotting on SequenceIDAccounting during HLOG Performance Test
John Leach created HBASE-16496: -- Summary: Hotspotting on SequenceIDAccounting during HLOG Performance Test Key: HBASE-16496 URL: https://issues.apache.org/jira/browse/HBASE-16496 Project: HBase Issue Type: Bug Reporter: John Leach Priority: Trivial I was seeing this hotspot during my tests. Adding a pic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16496) Hotspotting on SequenceIDAccounting during HLOG Performance Test
[ https://issues.apache.org/jira/browse/HBASE-16496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-16496: --- Attachment: HashMap_Hotspot.tiff Hotspot on byte[] into hashmap > Hotspotting on SequenceIDAccounting during HLOG Performance Test > > > Key: HBASE-16496 > URL: https://issues.apache.org/jira/browse/HBASE-16496 > Project: HBase > Issue Type: Bug >Reporter: John Leach >Assignee: John Leach >Priority: Trivial > Attachments: HashMap_Hotspot.tiff > > > I was seeing this hotspot for me during my tests. > Adding Pic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
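The attachment comment points at byte[] keys going into a HashMap. Arrays in Java inherit identity-based hashCode()/equals(), so byte[] keys only match the exact same array object, and any content-aware fix (wrapping, copying) costs extra work per lookup, which is one plausible ingredient of this kind of hotspot. The demonstration below only illustrates that pattern; it is not the SequenceIdAccounting code.

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Shows why raw byte[] keys misbehave in a HashMap and how wrapping fixes it.
class ByteKeyDemo {
    // A content-equal copy of the key MISSES in a raw byte[] map (identity semantics).
    static boolean rawKeyMisses() {
        Map<byte[], String> raw = new HashMap<>();
        raw.put(new byte[] {1, 2, 3}, "v");
        return raw.get(new byte[] {1, 2, 3}) == null;
    }

    // Wrapping in ByteBuffer restores content semantics, so the copy HITS,
    // at the price of a wrapper allocation per lookup.
    static String wrappedKeyHit() {
        Map<ByteBuffer, String> wrapped = new HashMap<>();
        wrapped.put(ByteBuffer.wrap(new byte[] {1, 2, 3}), "v");
        return wrapped.get(ByteBuffer.wrap(new byte[] {1, 2, 3}));
    }
}
```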
[jira] [Assigned] (HBASE-16496) Hotspotting on SequenceIDAccounting during HLOG Performance Test
[ https://issues.apache.org/jira/browse/HBASE-16496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-16496: -- Assignee: John Leach > Hotspotting on SequenceIDAccounting during HLOG Performance Test > > > Key: HBASE-16496 > URL: https://issues.apache.org/jira/browse/HBASE-16496 > Project: HBase > Issue Type: Bug >Reporter: John Leach >Assignee: John Leach >Priority: Trivial > > I was seeing this hotspot for me during my tests. > Adding Pic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16496) Hotspotting on SequenceIDAccounting during HLOG Performance Test
[ https://issues.apache.org/jira/browse/HBASE-16496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-16496: --- Attachment: HBASE-16496.patch Patch File. > Hotspotting on SequenceIDAccounting during HLOG Performance Test > > > Key: HBASE-16496 > URL: https://issues.apache.org/jira/browse/HBASE-16496 > Project: HBase > Issue Type: Bug >Reporter: John Leach >Assignee: John Leach >Priority: Trivial > Attachments: HBASE-16496.patch, HashMap_Hotspot.tiff > > > I was seeing this hotspot for me during my tests. > Adding Pic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16496) Hotspotting on SequenceIDAccounting during HLOG Performance Test
[ https://issues.apache.org/jira/browse/HBASE-16496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-16496: --- Status: Patch Available (was: Open) > Hotspotting on SequenceIDAccounting during HLOG Performance Test > > > Key: HBASE-16496 > URL: https://issues.apache.org/jira/browse/HBASE-16496 > Project: HBase > Issue Type: Bug >Reporter: John Leach >Assignee: John Leach >Priority: Trivial > Attachments: HBASE-16496.patch, HashMap_Hotspot.tiff > > > I was seeing this hotspot for me during my tests. > Adding Pic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17069) RegionServer writes invalid META entries for split daughters in some circumstances
[ https://issues.apache.org/jira/browse/HBASE-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15664500#comment-15664500 ] John Leach commented on HBASE-17069: Andrew, I just hit this issue on a stress test that does self-inserts... In case you were wondering if others were hitting it. https://splice.atlassian.net/browse/SPLICE-1155 > RegionServer writes invalid META entries for split daughters in some > circumstances > -- > > Key: HBASE-17069 > URL: https://issues.apache.org/jira/browse/HBASE-17069 > Project: HBase > Issue Type: Bug >Affects Versions: 1.2.4 >Reporter: Andrew Purtell >Priority: Critical > Attachments: daughter_1_d55ef81c2f8299abbddfce0445067830.log, > daughter_2_08629d59564726da2497f70451aafcdb.log, logs.tar.gz, > parent-393d2bfd8b1c52ce08540306659624f2.log > > > I have been seeing frequent ITBLL failures testing various versions of 1.2.x. > Over the lifetime of 1.2.x the following issues have been fixed: > - HBASE-15315 (Remove always set super user call as high priority) > - HBASE-16093 (Fix splits failed before creating daughter regions leave meta > inconsistent) > And this one is pending: > - HBASE-17044 (Fix merge failed before creating merged region leaves meta > inconsistent) > I can apply all of the above to branch-1.2 and still see this failure: > *The life of stillborn region d55ef81c2f8299abbddfce0445067830* > *Master sees SPLITTING_NEW* > {noformat} > 2016-11-08 04:23:21,186 INFO [AM.ZK.Worker-pool2-t82] master.RegionStates: > Transition null to {d55ef81c2f8299abbddfce0445067830 state=SPLITTING_NEW, > ts=1478579001186, server=node-3.cluster,16020,1478578389506} > {noformat} > *The RegionServer creates it* > {noformat} > 2016-11-08 04:23:26,035 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for GomnU: blockCache=LruBlockCache{blockCount=34, > currentSize=14996112, freeSize=12823716208, maxSize=12838712320, > heapSize=14996112, minSize=12196776960,
minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,038 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for big: blockCache=LruBlockCache{blockCount=34, > currentSize=14996112, freeSize=12823716208, maxSize=12838712320, > heapSize=14996112, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,442 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for meta: blockCache=LruBlockCache{blockCount=63, > currentSize=17187656, freeSize=12821524664, maxSize=12838712320, > heapSize=17187656, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,713 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for nwmrW: blockCache=LruBlockCache{blockCount=96, > currentSize=19178440, freeSize=12819533880, maxSize=12838712320, > heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,715 INFO > 
[StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for piwbr: blockCache=LruBlockCache{blockCount=96, > currentSize=19178440, freeSize=12819533880, maxSize=12838712320, > heapSize=19178440, minSize=12196776960, minFactor=0.95, multiSize=6098388480, > multiFactor=0.5, singleSize=3049194240, singleFactor=0.25}, > cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, > cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, > prefetchOnOpen=false > 2016-11-08 04:23:26,717 INFO > [StoreOpener-d55ef81c2f8299abbddfce0445067830-1] hfile.CacheConfig: Created > cacheConfig for tiny: blockCache=LruBlockCache{blockCount=96, > currentSize=19178440,
[jira] [Commented] (HBASE-17209) manual Array to Collection Copy: Automated
[ https://issues.apache.org/jira/browse/HBASE-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15709084#comment-15709084 ] John Leach commented on HBASE-17209: Oops. I will fix... > manual Array to Collection Copy: Automated > -- > > Key: HBASE-17209 > URL: https://issues.apache.org/jira/browse/HBASE-17209 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Priority: Trivial > Attachments: HBASE-17209.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17216) A Few Fields Can Be Safely Made Static
John Leach created HBASE-17216: -- Summary: A Few Fields Can Be Safely Made Static Key: HBASE-17216 URL: https://issues.apache.org/jira/browse/HBASE-17216 Project: HBase Issue Type: Improvement Reporter: John Leach Automated Test... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
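As context for the title: the automated inspection finds instance fields whose value never varies across instances; such a field can safely be made static so every instance shares one copy. A hypothetical before/after, not taken from the patch:

```java
// Hypothetical example of the "field can be made static" refactoring.
class Parser {
    // Before:  private final char delimiter = ',';  // one copy per Parser instance
    // After:   one copy shared by every Parser, with identical behavior.
    private static final char DELIMITER = ',';

    // Counts the delimiter-separated fields in a line.
    int countFields(String line) {
        int n = 1;
        for (int i = 0; i < line.length(); i++) {
            if (line.charAt(i) == DELIMITER) n++;
        }
        return n;
    }
}
```

The refactoring is only safe when no constructor or instance method assigns the field a per-instance value, which is what the automated check verifies.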
[jira] [Assigned] (HBASE-17216) A Few Fields Can Be Safely Made Static
[ https://issues.apache.org/jira/browse/HBASE-17216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach reassigned HBASE-17216: -- Assignee: John Leach > A Few Fields Can Be Safely Made Static > -- > > Key: HBASE-17216 > URL: https://issues.apache.org/jira/browse/HBASE-17216 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Assignee: John Leach > > Automated Test... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17216) A Few Fields Can Be Safely Made Static
[ https://issues.apache.org/jira/browse/HBASE-17216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-17216: --- Attachment: HBASE-17216.patch > A Few Fields Can Be Safely Made Static > -- > > Key: HBASE-17216 > URL: https://issues.apache.org/jira/browse/HBASE-17216 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Assignee: John Leach > Attachments: HBASE-17216.patch > > > Automated Test... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17216) A Few Fields Can Be Safely Made Static
[ https://issues.apache.org/jira/browse/HBASE-17216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-17216: --- Status: Patch Available (was: Open) > A Few Fields Can Be Safely Made Static > -- > > Key: HBASE-17216 > URL: https://issues.apache.org/jira/browse/HBASE-17216 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Assignee: John Leach > Attachments: HBASE-17216.patch > > > Automated Test... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17207) Arrays.asList() with too few arguments
[ https://issues.apache.org/jira/browse/HBASE-17207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-17207: --- Status: Patch Available (was: Open) > Arrays.asList() with too few arguments > -- > > Key: HBASE-17207 > URL: https://issues.apache.org/jira/browse/HBASE-17207 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Assignee: John Leach >Priority: Trivial > Attachments: HBASE-17202.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17207) Arrays.asList() with too few arguments
[ https://issues.apache.org/jira/browse/HBASE-17207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-17207: --- Attachment: HBASE-17202.patch > Arrays.asList() with too few arguments > -- > > Key: HBASE-17207 > URL: https://issues.apache.org/jira/browse/HBASE-17207 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Assignee: John Leach >Priority: Trivial > Attachments: HBASE-17202.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
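The inspection behind this title flags `Arrays.asList()` calls with zero or one argument, where `Collections.emptyList()` and `Collections.singletonList()` are the cheaper, clearer equivalents (no varargs array, lighter wrapper). A small before/after sketch, not a specific hunk of the attached patch:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Before/after for the "Arrays.asList() with too few arguments" cleanup.
class AsListCleanup {
    // Before: allocates a one-element varargs array plus the asList wrapper.
    static List<String> before() { return Arrays.asList("only"); }

    // After: a dedicated immutable single-element list; equal contents.
    static List<String> after() { return Collections.singletonList("only"); }
}
```

Both forms produce equal lists, so the swap is behavior-preserving for read-only use; note that neither list supports add/remove.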
[jira] [Updated] (HBASE-17208) Manual Array Copy Cleanup: Automated
[ https://issues.apache.org/jira/browse/HBASE-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-17208: --- Attachment: HBASE-17208.patch > Manual Array Copy Cleanup: Automated > > > Key: HBASE-17208 > URL: https://issues.apache.org/jira/browse/HBASE-17208 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Assignee: John Leach >Priority: Trivial > Attachments: HBASE-17208.patch > > > Remove Manual Array Copies: Automated -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17208) Manual Array Copy Cleanup: Automated
[ https://issues.apache.org/jira/browse/HBASE-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Leach updated HBASE-17208: --- Status: Patch Available (was: Open) > Manual Array Copy Cleanup: Automated > > > Key: HBASE-17208 > URL: https://issues.apache.org/jira/browse/HBASE-17208 > Project: HBase > Issue Type: Improvement >Reporter: John Leach >Assignee: John Leach >Priority: Trivial > Attachments: HBASE-17208.patch > > > Remove Manual Array Copies: Automated -- This message was sent by Atlassian JIRA (v6.3.4#6332)
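The cleanup this issue automates replaces hand-written element-by-element copy loops with `System.arraycopy` or `Arrays.copyOf`, which are clearer and let the JVM use a bulk-copy intrinsic. A representative before/after (hypothetical code, not a specific spot in the patch):

```java
import java.util.Arrays;

// Before/after for the manual-array-copy cleanup.
class ArrayCopyCleanup {
    // Before: hand-rolled element-by-element copy.
    static int[] manualCopy(int[] src) {
        int[] dst = new int[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = src[i];
        }
        return dst;
    }

    // After: one bulk call with identical result.
    static int[] bulkCopy(int[] src) {
        return Arrays.copyOf(src, src.length);
    }
}
```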
[jira] [Created] (HBASE-17209) manual Array to Collection Copy: Automated
John Leach created HBASE-17209: -- Summary: manual Array to Collection Copy: Automated Key: HBASE-17209 URL: https://issues.apache.org/jira/browse/HBASE-17209 Project: HBase Issue Type: Improvement Reporter: John Leach Priority: Trivial -- This message was sent by Atlassian JIRA (v6.3.4#6332)
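For context, the cleanup named in this summary replaces loops that add array elements to a collection one at a time with `Collections.addAll`. A representative before/after (hypothetical, not a specific hunk of the attached patch):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Before/after for the manual array-to-collection copy cleanup.
class ArrayToCollectionCleanup {
    // Before: one add() call per element.
    static List<String> manual(String[] src) {
        List<String> out = new ArrayList<>();
        for (String s : src) {
            out.add(s);
        }
        return out;
    }

    // After: a single bulk call, presizing the list to avoid resizes.
    static List<String> bulk(String[] src) {
        List<String> out = new ArrayList<>(src.length);
        Collections.addAll(out, src);
        return out;
    }
}
```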