[jira] [Resolved] (HBASE-24450) There was a partial failure due to IO when attempting to load
[ https://issues.apache.org/jira/browse/HBASE-24450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yukunpeng resolved HBASE-24450.
-------------------------------
    Resolution: Not A Problem

> There was a partial failure due to IO when attempting to load
> --------------------------------------------------------------
>
>                 Key: HBASE-24450
>                 URL: https://issues.apache.org/jira/browse/HBASE-24450
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.4
>            Reporter: yukunpeng
>            Priority: Minor
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (HBASE-24855) Why Hregion's lock method is not public?
yukunpeng created HBASE-24855:
---------------------------------

             Summary: Why Hregion's lock method is not public?
                 Key: HBASE-24855
                 URL: https://issues.apache.org/jira/browse/HBASE-24855
             Project: HBase
          Issue Type: Wish
            Reporter: yukunpeng

{code:java}
private void lock(final Lock lock) throws RegionTooBusyException, InterruptedIOException {
  lock(lock, 1);
}
{code}
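For context on the wish above: the private helper delegates to a timed `tryLock` so that operations on a busy region fail fast instead of blocking callers indefinitely, and keeping it private keeps the region's locking discipline internal to `HRegion`. A stdlib-only sketch of that timed-acquire pattern follows; `TooBusyException` and `lockOrFail` are hypothetical stand-ins for illustration, not HBase API.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockSketch {

  /** Hypothetical stand-in for HBase's RegionTooBusyException. */
  public static class TooBusyException extends RuntimeException {
    public TooBusyException(String message) {
      super(message);
    }
  }

  /**
   * Timed-acquire pattern: try to take the lock within a deadline and fail
   * fast with an exception instead of blocking the caller indefinitely.
   */
  public static void lockOrFail(Lock lock, long waitMillis) throws InterruptedException {
    if (!lock.tryLock(waitMillis, TimeUnit.MILLISECONDS)) {
      throw new TooBusyException("failed to acquire lock within " + waitMillis + " ms");
    }
  }

  public static void main(String[] args) throws InterruptedException {
    ReentrantLock lock = new ReentrantLock();
    lockOrFail(lock, 100); // uncontended, so this acquires immediately
    System.out.println(lock.isHeldByCurrentThread()); // prints true
    lock.unlock();
  }
}
```

Callers needing region-level synchronization are generally expected to go through the public operation methods rather than take the internal lock directly, which is one plausible reason the helper stays private.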
[jira] [Commented] (HBASE-24508) Why ProtobufUtil does not set scan's limit
[ https://issues.apache.org/jira/browse/HBASE-24508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17127213#comment-17127213 ]

yukunpeng commented on HBASE-24508:
-----------------------------------

{code:java}
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil
{code}

> Why ProtobufUtil does not set scan's limit
> ------------------------------------------
>
>                 Key: HBASE-24508
>                 URL: https://issues.apache.org/jira/browse/HBASE-24508
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.5
>            Reporter: yukunpeng
>            Priority: Trivial
>
> {code:java}
> // ProtobufUtil
> /**
>  * Convert a client Scan to a protocol buffer Scan
>  *
>  * @param scan the client Scan to convert
>  * @return the converted protocol buffer Scan
>  * @throws IOException
>  */
> public static ClientProtos.Scan toScan(final Scan scan) throws IOException {
>   ClientProtos.Scan.Builder scanBuilder = ClientProtos.Scan.newBuilder();
>   scanBuilder.setCacheBlocks(scan.getCacheBlocks());
>   if (scan.getBatch() > 0) {
>     scanBuilder.setBatchSize(scan.getBatch());
>   }
>   if (scan.getMaxResultSize() > 0) {
>     scanBuilder.setMaxResultSize(scan.getMaxResultSize());
>   }
>   if (scan.isSmall()) {
>     scanBuilder.setSmall(scan.isSmall());
>   }
>   if (scan.getAllowPartialResults()) {
>     scanBuilder.setAllowPartialResults(scan.getAllowPartialResults());
>   }
>   Boolean loadColumnFamiliesOnDemand = scan.getLoadColumnFamiliesOnDemandValue();
>   if (loadColumnFamiliesOnDemand != null) {
>     scanBuilder.setLoadColumnFamiliesOnDemand(loadColumnFamiliesOnDemand);
>   }
>   scanBuilder.setMaxVersions(scan.getMaxVersions());
>   scan.getColumnFamilyTimeRange().forEach((cf, timeRange) -> {
>     scanBuilder.addCfTimeRange(HBaseProtos.ColumnFamilyTimeRange.newBuilder()
>       .setColumnFamily(UnsafeByteOperations.unsafeWrap(cf))
>       .setTimeRange(toTimeRange(timeRange))
>       .build());
>   });
>   scanBuilder.setTimeRange(ProtobufUtil.toTimeRange(scan.getTimeRange()));
>   Map<String, byte[]> attributes = scan.getAttributesMap();
>   if (!attributes.isEmpty()) {
>     NameBytesPair.Builder attributeBuilder = NameBytesPair.newBuilder();
>     for (Map.Entry<String, byte[]> attribute : attributes.entrySet()) {
>       attributeBuilder.setName(attribute.getKey());
>       attributeBuilder.setValue(UnsafeByteOperations.unsafeWrap(attribute.getValue()));
>       scanBuilder.addAttribute(attributeBuilder.build());
>     }
>   }
>   byte[] startRow = scan.getStartRow();
>   if (startRow != null && startRow.length > 0) {
>     scanBuilder.setStartRow(UnsafeByteOperations.unsafeWrap(startRow));
>   }
>   byte[] stopRow = scan.getStopRow();
>   if (stopRow != null && stopRow.length > 0) {
>     scanBuilder.setStopRow(UnsafeByteOperations.unsafeWrap(stopRow));
>   }
>   if (scan.hasFilter()) {
>     scanBuilder.setFilter(ProtobufUtil.toFilter(scan.getFilter()));
>   }
>   if (scan.hasFamilies()) {
>     Column.Builder columnBuilder = Column.newBuilder();
>     for (Map.Entry<byte[], NavigableSet<byte[]>> family : scan.getFamilyMap().entrySet()) {
>       columnBuilder.setFamily(UnsafeByteOperations.unsafeWrap(family.getKey()));
>       NavigableSet<byte[]> qualifiers = family.getValue();
>       columnBuilder.clearQualifier();
>       if (qualifiers != null && qualifiers.size() > 0) {
>         for (byte[] qualifier : qualifiers) {
>           columnBuilder.addQualifier(UnsafeByteOperations.unsafeWrap(qualifier));
>         }
>       }
>       scanBuilder.addColumn(columnBuilder.build());
>     }
>   }
>   if (scan.getMaxResultsPerColumnFamily() >= 0) {
>     scanBuilder.setStoreLimit(scan.getMaxResultsPerColumnFamily());
>   }
>   if (scan.getRowOffsetPerColumnFamily() > 0) {
>     scanBuilder.setStoreOffset(scan.getRowOffsetPerColumnFamily());
>   }
>   if (scan.isReversed()) {
>     scanBuilder.setReversed(scan.isReversed());
>   }
>   if (scan.getConsistency() == Consistency.TIMELINE) {
>     scanBuilder.setConsistency(toConsistency(scan.getConsistency()));
>   }
>   if (scan.getCaching() > 0) {
>     scanBuilder.setCaching(scan.getCaching());
>   }
>   long mvccReadPoint = PackagePrivateFieldAccessor.getMvccReadPoint(scan);
>   if (mvccReadPoint > 0) {
>     scanBuilder.setMvccReadPoint(mvccReadPoint);
>   }
>   if (!scan.includeStartRow()) {
>     scanBuilder.setIncludeStartRow(false);
>   }
>   scanBuilder.setIncludeStopRow(scan.includeStopRow());
>   if (scan.getReadType() != Scan.ReadType.DEFAULT) {
>     scanBuilder.setReadType(toReadType(scan.getReadType()));
>   }
>   if (scan.isNeedCursorResult()) {
>     scanBuilder.setNeedCursorResult(true);
>   }
>   return scanBuilder.build();
> }
> {code}
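A likely answer to the question above, offered as a recollection to be verified against the HBase source rather than taken as authoritative: the row limit is not a field of the `ClientProtos.Scan` message at all. The number of rows wanted per RPC travels on the enclosing `ScanRequest` wrapper, and the overall `Scan.setLimit` is enforced by the client-side scanner, so `toScan` has nothing to copy. That split can be sketched with plain Java; every class and field name below is an illustrative stand-in, not the real protobuf API.

```java
// Minimal model of the Scan-vs-ScanRequest split. All names are illustrative
// stand-ins, not the real HBase protobuf classes.
public class ScanRequestSketch {

  /** Client-side scan settings (analogue of org.apache.hadoop.hbase.client.Scan). */
  public static class Scan {
    public int caching;
    public boolean reversed;
  }

  /** Analogue of the ClientProtos.Scan message: deliberately no limit field. */
  public static class ScanMessage {
    public int caching;
    public boolean reversed;
  }

  /** Analogue of the RPC request wrapper: the row count lives at this level. */
  public static class ScanRequest {
    public ScanMessage scan;
    public int numberOfRows;
  }

  /** Copies the scan settings; there is nowhere here to put a limit. */
  public static ScanMessage toScan(Scan scan) {
    ScanMessage m = new ScanMessage();
    m.caching = scan.caching;
    m.reversed = scan.reversed;
    return m;
  }

  /** The caller supplies the row count when the request is built. */
  public static ScanRequest buildScanRequest(Scan scan, int numberOfRows) {
    ScanRequest req = new ScanRequest();
    req.scan = toScan(scan);
    req.numberOfRows = numberOfRows;
    return req;
  }

  public static void main(String[] args) {
    Scan s = new Scan();
    s.caching = 100;
    ScanRequest req = buildScanRequest(s, 10);
    System.out.println(req.numberOfRows); // prints 10
  }
}
```

Keeping the per-RPC row count out of the serialized scan settings lets the client vary it from request to request (for example, shrinking the final fetch as the remaining limit runs down) without rebuilding the scan message.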
[jira] [Resolved] (HBASE-24508) Why ProtobufUtil does not set scan's limit
[ https://issues.apache.org/jira/browse/HBASE-24508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yukunpeng resolved HBASE-24508.
-------------------------------
    Resolution: Not A Bug

> Why ProtobufUtil does not set scan's limit
> ------------------------------------------
>
>                 Key: HBASE-24508
>                 URL: https://issues.apache.org/jira/browse/HBASE-24508
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.5
>            Reporter: yukunpeng
>            Priority: Trivial
>
[jira] [Created] (HBASE-24508) Why ProtobufUtil does not set scan's limit
yukunpeng created HBASE-24508:
---------------------------------

             Summary: Why ProtobufUtil does not set scan's limit
                 Key: HBASE-24508
                 URL: https://issues.apache.org/jira/browse/HBASE-24508
             Project: HBase
          Issue Type: Bug
    Affects Versions: 2.2.5
            Reporter: yukunpeng
[jira] [Comment Edited] (HBASE-24450) There was a partial failure due to IO when attempting to load
[ https://issues.apache.org/jira/browse/HBASE-24450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118274#comment-17118274 ]

yukunpeng edited comment on HBASE-24450 at 5/28/20, 3:51 AM:
-------------------------------------------------------------

When bulk loading some HFiles, an IO exception occurred in Hadoop. How can the load be retried?

{code:java}
regionserver.SecureBulkLoadManager: Failed to complete bulk load
java.io.IOException: Failed rename of hdfs://node1:9000/tmp/load/ontime/715505590901669888/f/8ccf9d873d114d248efd497c15a2007a to hdfs://node1:9000/Hydrogen/data/default/ontime/82c3619f9a6b2c3ddc73092deb239a85/f/4d6fa87ae5e24899b5a84dd944fa3c96_SeqId_84_
	at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:480)
	at org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:891)
{code}
{code:java}
2020-05-28 11:48:47,301 ERROR [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16020] regionserver.SecureBulkLoadManager: Failed to complete bulk load
java.io.IOException: Failed rename of hdfs://node1:9000/tmp/load/ontime/715505590901669888/f/b137252855764f51be61746b0bd8179a to hdfs://node1:9000/Hydrogen/data/default/ontime/82c3619f9a6b2c3ddc73092deb239a85/f/acf774e955064d319d7485168f6fcf49_SeqId_78_
	at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:480)
{code}

was (Author: yukunpeng):
When bulk loading some HFiles, an IO exception occurred in Hadoop. How can the load be retried?

> There was a partial failure due to IO when attempting to load
> --------------------------------------------------------------
>
>                 Key: HBASE-24450
>                 URL: https://issues.apache.org/jira/browse/HBASE-24450
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.4
>            Reporter: yukunpeng
>            Priority: Minor
>
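On the "how can the load be retried" question above: one client-side option is simply to wrap the bulk-load invocation in a generic retry-with-backoff loop and re-run it after a transient HDFS failure. The sketch below is a pattern illustration only, not an HBase API, and whether a plain re-run is safe depends on how the bulk-load tool handles files that were already committed on the first attempt.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetrySketch {

  /**
   * Re-runs an IO operation up to maxAttempts times, sleeping a linearly
   * growing backoff between attempts, and rethrows the last IOException if
   * every attempt fails.
   */
  public static <T> T withRetries(Callable<T> op, int maxAttempts, long backoffMillis)
      throws Exception {
    IOException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return op.call();
      } catch (IOException e) {
        last = e; // remember the failure; retry after a pause
        if (attempt < maxAttempts) {
          Thread.sleep(backoffMillis * attempt);
        }
      }
    }
    if (last == null) {
      throw new IllegalArgumentException("maxAttempts must be >= 1");
    }
    throw last;
  }

  public static void main(String[] args) throws Exception {
    int[] calls = {0};
    int result = withRetries(() -> {
      calls[0]++;
      if (calls[0] < 3) {
        throw new IOException("transient rename failure");
      }
      return 42;
    }, 5, 10);
    System.out.println(result + " after " + calls[0] + " attempts"); // prints "42 after 3 attempts"
  }
}
```

Only `IOException` is retried here; any other exception propagates immediately, on the assumption that non-IO failures are unlikely to be transient.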
[jira] [Updated] (HBASE-24450) There was a partial failure due to IO when attempting to load
[ https://issues.apache.org/jira/browse/HBASE-24450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yukunpeng updated HBASE-24450:
------------------------------
    Attachment: (was: image-2020-05-28-11-44-14-142.png)

> There was a partial failure due to IO when attempting to load
> --------------------------------------------------------------
>
>                 Key: HBASE-24450
>                 URL: https://issues.apache.org/jira/browse/HBASE-24450
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.4
>            Reporter: yukunpeng
>            Priority: Minor
>
[jira] [Commented] (HBASE-24450) There was a partial failure due to IO when attempting to load
[ https://issues.apache.org/jira/browse/HBASE-24450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118274#comment-17118274 ]

yukunpeng commented on HBASE-24450:
-----------------------------------

When bulk loading some HFiles, an IO exception occurred in Hadoop. How can the load be retried?

> There was a partial failure due to IO when attempting to load
> --------------------------------------------------------------
>
>                 Key: HBASE-24450
>                 URL: https://issues.apache.org/jira/browse/HBASE-24450
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.4
>            Reporter: yukunpeng
>            Priority: Minor
>
[jira] [Issue Comment Deleted] (HBASE-24450) There was a partial failure due to IO when attempting to load
[ https://issues.apache.org/jira/browse/HBASE-24450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yukunpeng updated HBASE-24450:
------------------------------
    Comment: was deleted

(was: !image-2020-05-28-11-44-14-142.png!)

> There was a partial failure due to IO when attempting to load
> --------------------------------------------------------------
>
>                 Key: HBASE-24450
>                 URL: https://issues.apache.org/jira/browse/HBASE-24450
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.4
>            Reporter: yukunpeng
>            Priority: Minor
>         Attachments: image-2020-05-28-11-44-14-142.png
>
[jira] [Created] (HBASE-24450) There was a partial failure due to IO when attempting to load
yukunpeng created HBASE-24450:
---------------------------------

             Summary: There was a partial failure due to IO when attempting to load
                 Key: HBASE-24450
                 URL: https://issues.apache.org/jira/browse/HBASE-24450
             Project: HBase
          Issue Type: Bug
    Affects Versions: 2.2.4
            Reporter: yukunpeng
[jira] [Commented] (HBASE-24450) There was a partial failure due to IO when attempting to load
[ https://issues.apache.org/jira/browse/HBASE-24450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118269#comment-17118269 ]

yukunpeng commented on HBASE-24450:
-----------------------------------

!image-2020-05-28-11-44-14-142.png!

> There was a partial failure due to IO when attempting to load
> --------------------------------------------------------------
>
>                 Key: HBASE-24450
>                 URL: https://issues.apache.org/jira/browse/HBASE-24450
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.4
>            Reporter: yukunpeng
>            Priority: Minor
>
[jira] [Updated] (HBASE-24450) There was a partial failure due to IO when attempting to load
[ https://issues.apache.org/jira/browse/HBASE-24450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

yukunpeng updated HBASE-24450:
------------------------------
    Attachment: image-2020-05-28-11-44-14-142.png

> There was a partial failure due to IO when attempting to load
> --------------------------------------------------------------
>
>                 Key: HBASE-24450
>                 URL: https://issues.apache.org/jira/browse/HBASE-24450
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 2.2.4
>            Reporter: yukunpeng
>            Priority: Minor
>         Attachments: image-2020-05-28-11-44-14-142.png
>