[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r554147880

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreContext.java

@@ -0,0 +1,194 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.function.Supplier;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * This carries the immutable information and references on some of the meta data about the HStore.
+ * This meta data can be used across the HFileWriter/Readers and other HStore consumers without the
+ * need of passing around the complete store.
+ */
+@InterfaceAudience.Private
+public final class StoreContext implements HeapSize {
+  public static final long FIXED_OVERHEAD = ClassSize.estimateBase(HStore.class, false);
+
+  private final int blockSize;
+  private final Encryption.Context encryptionContext;
+  private final CacheConfig cacheConf;
+  private final HRegionFileSystem regionFileSystem;
+  private final CellComparator comparator;
+  private final BloomType bloomFilterType;
+  private final Supplier<Collection<HStoreFile>> compactedFilesSupplier;
+  private final Supplier<InetSocketAddress[]> favoredNodesSupplier;
+  private final ColumnFamilyDescriptor family;
+  private final Path familyStoreDirectoryPath;
+  private final RegionCoprocessorHost coprocessorHost;
+
+  private StoreContext(Builder builder) {
+    this.blockSize = builder.blockSize;
+    this.encryptionContext = builder.encryptionContext;
+    this.cacheConf = builder.cacheConf;
+    this.regionFileSystem = builder.regionFileSystem;
+    this.comparator = builder.comparator;
+    this.bloomFilterType = builder.bloomFilterType;
+    this.compactedFilesSupplier = builder.compactedFilesSupplier;
+    this.favoredNodesSupplier = builder.favoredNodesSupplier;
+    this.family = builder.family;
+    this.familyStoreDirectoryPath = builder.familyStoreDirectoryPath;
+    this.coprocessorHost = builder.coprocessorHost;
+  }
+
+  public int getBlockSize() {
+    return blockSize;
+  }
+
+  public Encryption.Context getEncryptionContext() {
+    return encryptionContext;
+  }
+
+  public CacheConfig getCacheConf() {
+    return cacheConf;
+  }
+
+  public HRegionFileSystem getRegionFileSystem() {
+    return regionFileSystem;
+  }
+
+  public CellComparator getComparator() {
+    return comparator;
+  }
+
+  public BloomType getBloomFilterType() {
+    return bloomFilterType;
+  }
+
+  public Supplier<Collection<HStoreFile>> getCompactedFilesSupplier() {
+    return compactedFilesSupplier;
+  }
+
+  public InetSocketAddress[] getFavoredNodes() {
+    return favoredNodesSupplier.get();
+  }
+
+  public ColumnFamilyDescriptor getFamily() {
+    return family;
+  }
+
+  public Path getFamilyStoreDirectoryPath() {
+    return familyStoreDirectoryPath;
+  }
+
+  public RegionCoprocessorHost getCoprocessorHost() {
+    return coprocessorHost;
+  }
+
+  public static Builder getBuilder() {
+    return new Builder();
+  }
+
+  @Override
+  public long heapSize() {
+    return FIXED_OVERHEAD;
+  }

Review comment:

@apurtell Sorry for pinging again; I will merge this evening if I don't hear from you.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
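The shape under review (an immutable context object whose `final` fields are populated through a nested fluent `Builder`) can be sketched in isolation. The following is an illustrative, self-contained reduction; the names `DemoContext`, `withFamilyName`, and the 64 KiB default are invented for the sketch, not HBase API:

```java
/** Minimal sketch of an immutable context populated via a nested Builder. */
final class DemoContext {
  private final int blockSize;
  private final String familyName;

  // Private constructor: the only way to build one is through the Builder,
  // mirroring StoreContext(Builder builder) in the PR.
  private DemoContext(Builder builder) {
    this.blockSize = builder.blockSize;
    this.familyName = builder.familyName;
  }

  public int getBlockSize() { return blockSize; }
  public String getFamilyName() { return familyName; }

  public static Builder getBuilder() { return new Builder(); }

  public static class Builder {
    private int blockSize = 65536; // illustrative default for the sketch
    private String familyName;

    // Each setter returns the Builder itself so calls can be chained fluently.
    public Builder withBlockSize(int blockSize) {
      this.blockSize = blockSize;
      return this;
    }

    public Builder withFamilyName(String familyName) {
      this.familyName = familyName;
      return this;
    }

    public DemoContext build() {
      return new DemoContext(this);
    }
  }
}
```

Because every field is `final` and set exactly once in the private constructor, the built object is safe to hand to readers, writers, and other consumers without defensive copying, which is the point of the PR.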
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552986081

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreContext.java

(The quoted diff is identical to the one in the comment above.)

Review comment:

@apurtell Ping again on `heapSize()` and implementing `HeapSize` for the StoreContext class; could you have another look? I should have fixed it, but I will wait one or two days for your approval/resolve before merging.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
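The `heapSize()` contract being pinged about reduces to a small shape: a class precomputes its fixed overhead once and reports it from `heapSize()`. In HBase the constant comes from `ClassSize.estimateBase(...)`; in this standalone sketch the interface is redeclared locally and the constant is an invented placeholder:

```java
/** Local stand-in for org.apache.hadoop.hbase.io.HeapSize, for the sketch only. */
interface HeapSizeSketch {
  long heapSize();
}

final class FixedOverheadContext implements HeapSizeSketch {
  // In the PR this is ClassSize.estimateBase(HStore.class, false);
  // here a placeholder constant stands in for the estimated base size.
  static final long FIXED_OVERHEAD = 64L;

  @Override
  public long heapSize() {
    // All fields are references counted in the base estimate, so the
    // fixed overhead is the whole answer for an immutable context.
    return FIXED_OVERHEAD;
  }
}
```

Returning only the fixed overhead is reasonable precisely because the context is immutable: it owns no growing collections, so its heap cost does not change after construction.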
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552887851

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java

@@ -308,7 +294,7 @@ protected HStore(final HRegion region, final ColumnFamilyDescriptor family,
       this.compactionCheckMultiplier = DEFAULT_COMPACTCHECKER_INTERVAL_MULTIPLIER;
     }
-    this.storeEngine = createStoreEngine(this, this.conf, this.comparator);
+    this.storeEngine = createStoreEngine(this, this.conf, getComparator());

Review comment:

Changed to use `region.getCellComparator()`.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552887639

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java

@@ -268,34 +251,37 @@ protected HStore(final HRegion region, final ColumnFamilyDescriptor family,
         .addBytesMap(region.getTableDescriptor().getValues())
         .addStringMap(family.getConfiguration())
         .addBytesMap(family.getValues());
-    this.blocksize = family.getBlocksize();
+
+    this.region = region;
+    this.storeContext = initializeStoreContext(family);
+
+    // Assemble the store's home directory and Ensure it exists.
+    getRegionFileSystem().createStoreDir(family.getNameAsString());

     // set block storage policy for store directory
     String policyName = family.getStoragePolicy();
     if (null == policyName) {
       policyName = this.conf.get(BLOCK_STORAGE_POLICY_KEY, DEFAULT_BLOCK_STORAGE_POLICY);
     }
-    this.fs.setStoragePolicy(family.getNameAsString(), policyName.trim());
+    getRegionFileSystem().setStoragePolicy(getColumnFamilyName(), policyName.trim());

Review comment:

Changed back to use `family` and `region.getRegionFileSystem()` directly.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552874873

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java

@@ -46,6 +46,8 @@
   int PRIORITY_USER = 1;
   int NO_PRIORITY = Integer.MIN_VALUE;

+  int getBlockSize();

Review comment:

This was a new change in the latest commit of this PR; it is mainly used by `HMobStore` when calling `createWriterInTmp`. Let me change it to use `getStoreContext().getBlockSize()` to avoid adding a new interface method.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552389039

## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java

@@ -138,6 +138,10 @@ public boolean isCompressedOrEncrypted() {
     return compressAlgo;
   }

+  public void setCompression(Compression.Algorithm compressAlgo) {
+    this.compressAlgo = compressAlgo;
+  }
+

Review comment:

Oops, I missed this change; it has now been updated/removed.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552323137

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java

@@ -246,6 +234,8 @@
   private AtomicLong compactedCellsSize = new AtomicLong();
   private AtomicLong majorCompactedCellsSize = new AtomicLong();

+  private HStoreContext storeContext;

Review comment:

Thanks, I should have provided a follow-up in the latest diff; I propose keeping the `StoreContext` naming. Marked this conversation as resolved.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552321656

## File path: hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java

@@ -138,6 +138,10 @@ public boolean isCompressedOrEncrypted() {
     return compressAlgo;
   }

+  public void setCompression(Compression.Algorithm compressAlgo) {
+    this.compressAlgo = compressAlgo;
+  }
+

Review comment:

Fixed, and reverted back to the builder pattern.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552321526

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java

@@ -254,48 +244,47 @@
   protected HStore(final HRegion region, final ColumnFamilyDescriptor family,
       final Configuration confParam, boolean warmup) throws IOException {
-    this.fs = region.getRegionFileSystem();
-
-    // Assemble the store's home directory and Ensure it exists.
-    fs.createStoreDir(family.getNameAsString());
-    this.region = region;
-    this.family = family;
     // 'conf' renamed to 'confParam' b/c we use this.conf in the constructor
     // CompoundConfiguration will look for keys in reverse order of addition, so we'd
     // add global config first, then table and cf overrides, then cf metadata.
     this.conf = new CompoundConfiguration()
-      .add(confParam)
-      .addBytesMap(region.getTableDescriptor().getValues())
-      .addStringMap(family.getConfiguration())
-      .addBytesMap(family.getValues());
-    this.blocksize = family.getBlocksize();
+        .add(confParam)
+        .addBytesMap(region.getTableDescriptor().getValues())
+        .addStringMap(family.getConfiguration())
+        .addBytesMap(family.getValues());
+
+    this.region = region;
+    this.storeContext = initializeStoreContext(family);
+
+    // Assemble the store's home directory and Ensure it exists.
+    getRegionFileSystem().createStoreDir(getColumnFamilyName());
+
+    this.blocksize = getColumnFamilyDescriptor().getBlocksize();

     // set block storage policy for store directory
-    String policyName = family.getStoragePolicy();
+    String policyName = getColumnFamilyDescriptor().getStoragePolicy();

Review comment:

Fixed.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552321478

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java

@@ -347,6 +331,48 @@ protected HStore(final HRegion region, final ColumnFamilyDescriptor family,
     cacheOnWriteLogged = false;
   }

+  private StoreContext initializeStoreContext(ColumnFamilyDescriptor family) throws IOException {
+    return new StoreContext.Builder()
+        .withBloomType(family.getBloomFilterType())
+        .withCacheConfig(createCacheConf(family))
+        .withCellComparator(region.getCellComparator())
+        .withColumnFamilyDescriptor(family)
+        .withCompactedFilesSupplier(this::getCompactedFiles)
+        .withRegionFileSystem(region.getRegionFileSystem())
+        .withDefaultHFileContext(getDefaultHFileContext(family))
+        .withFavoredNodesSupplier(this::getFavoredNodes)
+        .withFamilyStoreDirectoryPath(region.getRegionFileSystem()
+            .getStoreDir(family.getNameAsString()))
+        .withRegionCoprocessorHost(region.getCoprocessorHost())
+        .build();
+  }
+
+  private InetSocketAddress[] getFavoredNodes() {
+    InetSocketAddress[] favoredNodes = null;
+    if (region.getRegionServerServices() != null) {
+      favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
+          region.getRegionInfo().getEncodedName());
+    }
+    return favoredNodes;
+  }
+
+  private HFileContext getDefaultHFileContext(ColumnFamilyDescriptor family) throws IOException {

Review comment:

The `conf` and `region` used within this function were blocking it from being static. Thinking about it again, having this big defaultHFileContext within `StoreContext` is strange; the original thought was to reuse references where we can. I will remove this `getDefaultHFileContext` and `defaultHFileContext` in `StoreContext`.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
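The builder above wires in `this::getCompactedFiles` and `this::getFavoredNodes` as `Supplier`s rather than copying the current values. The reason is that the context is immutable but the underlying state is not: each `get()` re-reads live state. A self-contained sketch of that distinction, with invented names (`SupplierDemo`, `liveValue`):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

/**
 * Why an immutable context holds a Supplier instead of a snapshot:
 * the Supplier reference is fixed at construction, but each get()
 * observes the current value of the mutable state behind it.
 */
class SupplierDemo {
  private final AtomicInteger liveValue = new AtomicInteger(0);

  // Captured once, like compactedFilesSupplier in the PR's builder.
  private final Supplier<Integer> valueSupplier = liveValue::get;

  public void update(int v) {
    liveValue.set(v);
  }

  public int read() {
    // Always reflects the latest update, even though the field is final.
    return valueSupplier.get();
  }
}
```

Had the builder stored a plain value instead, every consumer of the context would see the state as it was when the `HStore` was constructed, which would be wrong for things like the set of compacted files.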
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552321068

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java

@@ -474,33 +502,14 @@ public long getBlockingFileCount() {
   }
   /* End implementation of StoreConfigInformation */

-  /**
-   * Returns the configured bytesPerChecksum value.
-   * @param conf The configuration
-   * @return The bytesPerChecksum that is set in the configuration
-   */
-  public static int getBytesPerChecksum(Configuration conf) {
-    return conf.getInt(HConstants.BYTES_PER_CHECKSUM,
-        HFile.DEFAULT_BYTES_PER_CHECKSUM);
-  }
-
-  /**
-   * Returns the configured checksum algorithm.
-   * @param conf The configuration
-   * @return The checksum algorithm that is set in the configuration
-   */
-  public static ChecksumType getChecksumType(Configuration conf) {
-    String checksumName = conf.get(HConstants.CHECKSUM_TYPE_NAME);
-    if (checksumName == null) {
-      return ChecksumType.getDefaultChecksumType();
-    } else {
-      return ChecksumType.nameToType(checksumName);
-    }
-  }

   @Override
   public ColumnFamilyDescriptor getColumnFamilyDescriptor() {
-    return this.family;
+    return this.storeContext.getFamily();
+  }
+
+  public Encryption.Context getEncryptionContext() {

Review comment:

`getEncryptionContext` is used by `HMobStore`, so we made it `public`; it could be package-private. `getStoreContext` was planned for a follow-up PR when needed, though you're right that if we had a `getStoreContext`, this `getEncryptionContext` could be removed.

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552320678

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreContext.java

@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.function.Supplier;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFileContext;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * This carries the information on some of the meta data about the HStore. This
+ * meta data can be used across the HFileWriter/Readers and other HStore consumers without the
+ * need of passing around the complete store.

Review comment:

Thanks for sharing a list of *Info and *Context classes. I browsed a few of them and `Context` still seems okay, so let's keep the `StoreContext` name and keep this change simple; I have added the `immutable` wording to the class comment.

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java

@@ -1206,53 +1218,34 @@ public StoreFileWriter createWriterInTmp(long maxKeyCount, Compression.Algorithm
         }
       }
     }
-    InetSocketAddress[] favoredNodes = null;
-    if (region.getRegionServerServices() != null) {
-      favoredNodes = region.getRegionServerServices().getFavoredNodesForRegion(
-          region.getRegionInfo().getEncodedName());
-    }
-    HFileContext hFileContext = createFileContext(compression, includeMVCCReadpoint, includesTag,
-        cryptoContext);
-    Path familyTempDir = new Path(fs.getTempDir(), family.getNameAsString());
-    StoreFileWriter.Builder builder = new StoreFileWriter.Builder(conf, writerCacheConf,
-        this.getFileSystem())
-        .withOutputDir(familyTempDir)
-        .withBloomType(family.getBloomFilterType())
-        .withMaxKeyCount(maxKeyCount)
-        .withFavoredNodes(favoredNodes)
-        .withFileContext(hFileContext)
-        .withShouldDropCacheBehind(shouldDropBehind)
-        .withCompactedFilesSupplier(this::getCompactedFiles)
-        .withFileStoragePolicy(fileStoragePolicy);
+    HFileContext hFileContext = createFileContext(compression, includeMVCCReadpoint, includesTag);
+    Path familyTempDir = new Path(getRegionFileSystem().getTempDir(), getColumnFamilyName());
+    StoreFileWriter.Builder builder =
+        new StoreFileWriter.Builder(conf, writerCacheConf, getFileSystem())
+            .withOutputDir(familyTempDir)
+            .withBloomType(storeContext.getBloomFilterType())
+            .withMaxKeyCount(maxKeyCount)
+            .withFavoredNodes(storeContext.getFavoredNodes())
+            .withFileContext(hFileContext)
+            .withShouldDropCacheBehind(shouldDropBehind)
+            .withCompactedFilesSupplier(storeContext.getCompactedFilesSupplier())
+            .withFileStoragePolicy(fileStoragePolicy);
     return builder.build();
   }

   private HFileContext createFileContext(Compression.Algorithm compression,
-      boolean includeMVCCReadpoint, boolean includesTag, Encryption.Context cryptoContext) {
+      boolean includeMVCCReadpoint, boolean includesTag) {
     if (compression == null) {
       compression = HFile.DEFAULT_COMPRESSION_ALGORITHM;
     }
-    HFileContext hFileContext = new HFileContextBuilder()
-        .withIncludesMvcc(includeMVCCReadpoint)
-        .withIncludesTags(includesTag)
-        .withCompression(compression)
-        .withCompressTags(family.isCompressTags())
-        .withChecksumType(checksumType)
-        .withBytesPerCheckSum(bytesPerChecksum)
-        .withBlockSize(blocksize)
-
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552205785

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java

@@ -246,6 +234,8 @@
   private AtomicLong compactedCellsSize = new AtomicLong();
   private AtomicLong majorCompactedCellsSize = new AtomicLong();

+  private HStoreContext storeContext;

Review comment:

@saintstack, do you have other comments about the scope of this StoreContext? Do we need to rename it, or do you think we can move forward and introduce this context object with the proposed set of informative accessors/references?

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[GitHub] [hbase] taklwu commented on a change in pull request #2800: HBASE-25249 Adding StoreContext
taklwu commented on a change in pull request #2800:
URL: https://github.com/apache/hbase/pull/2800#discussion_r552203329

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreUtils.java

@@ -136,4 +140,26 @@ public static OptionalLong getMaxSequenceIdInList(Collection<HStoreFile> sfs) {
     return largestFile.isPresent() ? StoreUtils.getFileSplitPoint(largestFile.get(), comparator)
         : Optional.empty();
   }
+
+  /**
+   * Returns the configured checksum algorithm.
+   * @param conf The configuration
+   * @return The checksum algorithm that is set in the configuration
+   */
+  public static ChecksumType getChecksumType(Configuration conf) {
+    String checksumName = conf.get(HConstants.CHECKSUM_TYPE_NAME);

Review comment:

@saintstack, we have updated this in the last commit; do you mind having a second look?

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
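The helper moved into `StoreUtils` is a lookup-with-default: read a name from configuration, fall back to a default algorithm when unset, otherwise parse the name. It can be sketched without HBase dependencies; the key string, the enum constants, and the `CRC32C` default below are assumptions for the sketch (the real values come from `HConstants` and HBase's `ChecksumType`):

```java
import java.util.HashMap;
import java.util.Map;

/** Standalone sketch of the StoreUtils.getChecksumType lookup-with-default pattern. */
class ChecksumLookup {
  // Key name assumed for illustration; the real code uses HConstants.CHECKSUM_TYPE_NAME.
  public static final String CHECKSUM_TYPE_NAME = "hbase.hstore.checksum.algorithm";

  public enum ChecksumType {
    NULL, CRC32, CRC32C;

    // Default assumed for the sketch; HBase defines its own default.
    public static ChecksumType getDefaultChecksumType() {
      return CRC32C;
    }

    public static ChecksumType nameToType(String name) {
      return valueOf(name);
    }
  }

  // A plain map stands in for Hadoop's Configuration here.
  private final Map<String, String> conf = new HashMap<>();

  public void set(String key, String value) {
    conf.put(key, value);
  }

  public ChecksumType getChecksumType() {
    String checksumName = conf.get(CHECKSUM_TYPE_NAME);
    if (checksumName == null) {
      return ChecksumType.getDefaultChecksumType();  // unset: use the default
    } else {
      return ChecksumType.nameToType(checksumName);  // set: parse the configured name
    }
  }
}
```

Making the helper a static utility on the configuration (rather than an instance method on `HStore`) is what the review thread asked for: the result depends only on the config, so any caller can use it without holding a store.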
taklwu commented on a change in pull request #2800: URL: https://github.com/apache/hbase/pull/2800#discussion_r551656699 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreContext.java ## @@ -0,0 +1,174 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.CellComparator; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.io.hfile.CacheConfig; +import org.apache.hadoop.hbase.io.hfile.HFileContext; +import org.apache.yetus.audience.InterfaceAudience; + +import java.net.InetSocketAddress; +import java.util.Collection; +import java.util.function.Supplier; + +/** + * This carries the information on some of the meta data about the HStore. This + * meta data can be used across the HFileWriter/Readers and other HStore consumers without the + * need of passing around the complete store. + */ +@InterfaceAudience.Private +public class HStoreContext { Review comment: Fixed, and implemented `heapSize()`.
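The `heapSize()` mentioned here comes from HBase's `HeapSize` interface, whose contract is simply to report an estimate of the object's heap footprint. Below is a minimal, self-contained sketch of that idea: a local stand-in interface is declared, and a hypothetical constant of `64` bytes replaces the `ClassSize.estimateBase(...)` value the real patch computes.

```java
// Local stand-in for org.apache.hadoop.hbase.io.HeapSize.
interface HeapSize {
    long heapSize();
}

public class HeapSizeSketch implements HeapSize {
    // Hypothetical fixed estimate; the real patch uses
    // ClassSize.estimateBase(StoreContext.class, false) instead.
    public static final long FIXED_OVERHEAD = 64L;

    @Override
    public long heapSize() {
        // A context of immutable references has a fixed shallow footprint,
        // so a precomputed constant is sufficient.
        return FIXED_OVERHEAD;
    }

    public static void main(String[] args) {
        System.out.println(new HeapSizeSketch().heapSize()); // prints 64
    }
}
```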
taklwu commented on a change in pull request #2800: URL: https://github.com/apache/hbase/pull/2800#discussion_r551656590 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java ## @@ -2559,7 +2572,7 @@ public boolean needsCompaction() { * @return cache configuration for this Store. */ public CacheConfig getCacheConfig() { -return this.cacheConf; +return storeContext.getCacheConf(); } public static final long FIXED_OVERHEAD = ClassSize.estimateBase(HStore.class, false); Review comment: This has been changed as well; could you have another look?
taklwu commented on a change in pull request #2800: URL: https://github.com/apache/hbase/pull/2800#discussion_r551656408 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java ## @@ -474,33 +501,14 @@ public long getBlockingFileCount() { } /* End implementation of StoreConfigInformation */ - /** - * Returns the configured bytesPerChecksum value. - * @param conf The configuration - * @return The bytesPerChecksum that is set in the configuration - */ - public static int getBytesPerChecksum(Configuration conf) { -return conf.getInt(HConstants.BYTES_PER_CHECKSUM, - HFile.DEFAULT_BYTES_PER_CHECKSUM); - } - - /** - * Returns the configured checksum algorithm. - * @param conf The configuration - * @return The checksum algorithm that is set in the configuration - */ - public static ChecksumType getChecksumType(Configuration conf) { -String checksumName = conf.get(HConstants.CHECKSUM_TYPE_NAME); -if (checksumName == null) { - return ChecksumType.getDefaultChecksumType(); -} else { - return ChecksumType.nameToType(checksumName); -} - } @Override public ColumnFamilyDescriptor getColumnFamilyDescriptor() { -return this.family; +return this.storeContext.getFamily(); + } + + public Encryption.Context getCryptoContext() { Review comment: I have changed that in the latest update; please see if you have any more comments. I have marked it as resolved for now.
taklwu commented on a change in pull request #2800: URL: https://github.com/apache/hbase/pull/2800#discussion_r549894991 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreContext.java ## @@ -0,0 +1,182 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import java.net.InetSocketAddress; +import java.util.Collection; +import java.util.function.Supplier; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.CellComparator; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.io.HeapSize; +import org.apache.hadoop.hbase.io.hfile.CacheConfig; +import org.apache.hadoop.hbase.io.hfile.HFileContext; +import org.apache.hadoop.hbase.util.ClassSize; +import org.apache.yetus.audience.InterfaceAudience; + +/** + * This carries the information on some of the meta data about the HStore. This + * meta data can be used across the HFileWriter/Readers and other HStore consumers without the + * need of passing around the complete store. 
+ */ +@InterfaceAudience.Private +public final class StoreContext implements HeapSize { + public static final long FIXED_OVERHEAD = ClassSize.estimateBase(HStore.class, false); + + private final HFileContext defaultFileContext; + private final CacheConfig cacheConf; + private final HRegionFileSystem regionFileSystem; + private final CellComparator comparator; + private final BloomType bloomFilterType; + private final Supplier<Collection<HStoreFile>> compactedFilesSupplier; + private final Supplier<InetSocketAddress[]> favoredNodesSupplier; + private final ColumnFamilyDescriptor family; + private final Path familyStoreDirectoryPath; + private final RegionCoprocessorHost coprocessorHost; + + private StoreContext(Builder builder) { +this.defaultFileContext = builder.defaultFileContext; +this.cacheConf = builder.cacheConf; +this.regionFileSystem = builder.regionFileSystem; +this.comparator = builder.comparator; +this.bloomFilterType = builder.bloomFilterType; +this.compactedFilesSupplier = builder.compactedFilesSupplier; +this.favoredNodesSupplier = builder.favoredNodesSupplier; +this.family = builder.family; +this.familyStoreDirectoryPath = builder.familyStoreDirectoryPath; +this.coprocessorHost = builder.coprocessorHost; + } + + public HFileContext getDefaultFileContext() { +return defaultFileContext; + } + + public CacheConfig getCacheConf() { +return cacheConf; + } + + public HRegionFileSystem getRegionFileSystem() { +return regionFileSystem; + } + + public CellComparator getComparator() { +return comparator; + } + + public BloomType getBloomFilterType() { +return bloomFilterType; + } + + public Supplier<Collection<HStoreFile>> getCompactedFilesSupplier() { +return compactedFilesSupplier; + } + + public Supplier<InetSocketAddress[]> getFavoredNodesSupplier() { +return favoredNodesSupplier; + } Review comment: For `getCompactedFilesSupplier()`, we should keep using a supplier, because the values current at the time `StoreFileWriter.java#appendMetadata` is called are what get written for `COMPACTION_EVENT_KEY`. Unless we change the builder of `StoreFileWriter`, I would propose keeping the supplier as is. For `getFavoredNodesSupplier()`, I think you may be right that `withFavoredNodes` could use the values directly per writer creation; I will update that in the next change.
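The argument for keeping the supplier can be illustrated with a small, self-contained sketch (plain `String` file names stand in for `HStoreFile`, and the names are hypothetical): an eagerly captured collection is frozen at context-construction time, while a `Supplier` re-reads the live set whenever it is invoked, e.g. when `appendMetadata` runs at file-close time.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.function.Supplier;

public class SupplierDemo {
    public static int[] run() {
        // Hypothetical stand-in for the store's live set of compacted files.
        List<String> compactedFiles = new ArrayList<>();

        // Eager: snapshots the collection when the context is built.
        Collection<String> eager = new ArrayList<>(compactedFiles);
        // Lazy: defers the read until the supplier is invoked.
        Supplier<Collection<String>> lazy = () -> new ArrayList<>(compactedFiles);

        // A compaction finishes after the context was built...
        compactedFiles.add("hfile-001");

        // ...so the eager snapshot is stale, while the supplier sees the new file.
        return new int[] { eager.size(), lazy.get().size() };
    }

    public static void main(String[] args) {
        int[] sizes = run();
        System.out.println("eager=" + sizes[0] + " lazy=" + sizes[1]); // eager=0 lazy=1
    }
}
```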
taklwu commented on a change in pull request #2800: URL: https://github.com/apache/hbase/pull/2800#discussion_r548317844 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java ## @@ -246,6 +234,8 @@ private AtomicLong compactedCellsSize = new AtomicLong(); private AtomicLong majorCompactedCellsSize = new AtomicLong(); + private HStoreContext storeContext; Review comment: IMO those informative accessors/references and final/read-only primitives should ideally live in the context, although we're focusing on writer (`StoreFileWriter`) related references and may have missed a few of them (e.g. `scanInfo`) in this commit. Let me try to clarify your suggestion: 1. If you see `StoreContext` as general enough to apply in most cases, are those missing fields (e.g. `scanInfo` and the final primitives) what you're trying to point out? If so, we can revisit and filter/add more into the `StoreContext`. 2. The scope of this `Context` is more related to the Writer (`StoreFileWriter`) and the Committer (to be added); should we rename it to `StoreWriterContext`/`StoreWriteContext` since it is used by those operators?
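The context object under discussion bundles a store's immutable references behind a builder, so consumers such as the writer never need the whole `HStore`. The sketch below is a simplified, hypothetical rendering of that pattern: the field names mirror the PR, but the types are trimmed down and the `withBlockSize`/`withBloomType` setters are illustrative, not the PR's exact API.

```java
// Immutable context assembled via a fluent builder: callers set only the
// fields they need, then build() freezes them behind read-only getters.
public final class StoreContextSketch {
    private final int blockSize;
    private final String bloomFilterType;

    private StoreContextSketch(Builder b) {
        this.blockSize = b.blockSize;
        this.bloomFilterType = b.bloomFilterType;
    }

    public int getBlockSize() { return blockSize; }
    public String getBloomFilterType() { return bloomFilterType; }

    public static class Builder {
        private int blockSize;
        private String bloomFilterType;

        public Builder withBlockSize(int blockSize) {
            this.blockSize = blockSize;
            return this;
        }

        public Builder withBloomType(String bloomFilterType) {
            this.bloomFilterType = bloomFilterType;
            return this;
        }

        public StoreContextSketch build() {
            return new StoreContextSketch(this);
        }
    }

    public static void main(String[] args) {
        StoreContextSketch ctx = new Builder()
            .withBlockSize(65536)
            .withBloomType("ROW")
            .build();
        System.out.println(ctx.getBlockSize() + " " + ctx.getBloomFilterType()); // prints "65536 ROW"
    }
}
```

Because the context is final and its fields are set once, it can be handed to readers, writers, and coprocessors without any risk of one consumer mutating state another depends on.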