[jira] [Commented] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file
[ https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17182658#comment-17182658 ] Hemanth Boyina commented on HDFS-15445: --- Thanks for reporting the issue [~igloo1986]. This is really a Hadoop Common issue, so I am moving it to the Hadoop Common module. Can you provide a proper patch? Yours seems to have unrelated changes.

> ZStandardCodec compression may fail (generic error) when encountering a specific file
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 2.6.5
> Environment: zstd 1.3.3, hadoop 2.6.5
>
> --- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>    @BeforeClass
>    public static void beforeClass() throws Exception {
>      CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> -    uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> -        .getResource("/zstd/test_file.txt").toURI());
> -    compressedFile = new File(TestZStandardCompressorDecompressor.class
> -        .getResource("/zstd/test_file.txt.zst").toURI());
> +    uncompressedFile = new File("/tmp/badcase.data");
> +    compressedFile = new File("/tmp/badcase.data.zst");
>
> Reporter: Igloo
> Priority: Blocker
> Attachments: HDFS-15445.patch, badcase.data, image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
> *Problem:*
> In our production environment we store files in HDFS with the zstd compressor. Recently we found that a specific file can lead to zstandard compressor failures.
> We can reproduce the issue with that specific file (attached: badcase.data).
> !image-2020-06-30-11-51-18-026.png|width=1031,height=230!
>
> *Analysis:*
> ZStandardCompressor uses bufferSize (taken from zstd's recommended compress *output* buffer size) for both inBufferSize and outBufferSize.
> !image-2020-06-30-11-35-46-859.png|width=1027,height=387!
> But zstd actually provides two separate recommendations, one for the input buffer size and one for the output buffer size.
> !image-2020-06-30-11-39-17-861.png!
>
> *Workaround:*
> One workaround, using the recommended in/out buffer sizes provided by the zstd library, avoids the problem, though we don't know why.
> zstd recommended input buffer size: 131072 (128 * 1024)
> zstd recommended output buffer size: 131591
> !image-2020-06-30-11-42-44-585.png|width=1023,height=196!

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
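The pattern behind the workaround above — feeding the compressor fixed-size input chunks while treating the output size as an independent concern — can be sketched with Python's stdlib zlib as a stand-in for the zstd streaming API. This is illustrative only: zlib has no analogue of zstd's ZSTD_CStreamInSize()/ZSTD_CStreamOutSize(), so the sizes below simply echo the values the reporter quotes, and the names are mine, not Hadoop's.

```python
import zlib

# Illustrative sizes echoing the report: zstd recommends ~131072 bytes in
# and ~131591 bytes out. The point is that the two recommendations differ,
# so one value should not be reused for both buffers.
IN_BUF_SIZE = 128 * 1024
OUT_BUF_SIZE = 131591

def stream_compress(data: bytes, chunk_size: int = IN_BUF_SIZE) -> bytes:
    """Feed input to the compressor in fixed-size chunks, collecting
    whatever output each step produces (which may be any length)."""
    comp = zlib.compressobj()
    out = bytearray()
    for i in range(0, len(data), chunk_size):
        out += comp.compress(data[i:i + chunk_size])
    out += comp.flush()  # drain any buffered output at end of stream
    return bytes(out)

def stream_decompress(blob: bytes, chunk_size: int = IN_BUF_SIZE) -> bytes:
    """Symmetric streaming decompression for a round-trip check."""
    decomp = zlib.decompressobj()
    out = bytearray()
    for i in range(0, len(blob), chunk_size):
        out += decomp.decompress(blob[i:i + chunk_size])
    out += decomp.flush()
    return bytes(out)

original = bytes(range(256)) * 4096  # 1 MiB of not-very-compressible data
assert stream_decompress(stream_compress(original)) == original
```

In the Hadoop JNI wrapper, the analogous fix would be to size the native input and output buffers from the library's two separate recommendations rather than reusing a single value for both, which is what the attached patch appears to do.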
[jira] [Commented] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file
[ https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148705#comment-17148705 ] Hadoop QA commented on HDFS-15445: --

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 40s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 22m 35s | trunk passed |
| +1 | compile | 18m 45s | trunk passed |
| +1 | checkstyle | 0m 46s | trunk passed |
| +1 | mvnsite | 1m 25s | trunk passed |
| +1 | shadedclient | 18m 5s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 56s | trunk passed |
| 0 | spotbugs | 2m 30s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 26s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 6s | the patch passed |
| +1 | compile | 21m 37s | the patch passed |
| -1 | cc | 21m 37s | root generated 22 new + 140 unchanged - 22 fixed = 162 total (was 162) |
| +1 | golang | 21m 37s | the patch passed |
| +1 | javac | 21m 37s | the patch passed |
| -0 | checkstyle | 0m 53s | hadoop-common-project/hadoop-common: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 | mvnsite | 1m 45s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 18m 33s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 5s | the patch passed |
| +1 | findbugs | 2m 36s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 10m 28s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 56s | The patch does not generate ASF License warnings. |
| | | 125m 55s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29468/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15445 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006750/HDFS-15445.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc golang |
| uname | Linux 070e3fbf97ed 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh
[jira] [Commented] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file
[ https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148387#comment-17148387 ] Hadoop QA commented on HDFS-15445: --

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 28s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 42s | trunk passed |
| +1 | compile | 18m 2s | trunk passed |
| +1 | checkstyle | 0m 47s | trunk passed |
| +1 | mvnsite | 1m 21s | trunk passed |
| +1 | shadedclient | 17m 37s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 55s | trunk passed |
| 0 | spotbugs | 2m 10s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 7s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 51s | the patch passed |
| +1 | compile | 17m 27s | the patch passed |
| -1 | cc | 17m 27s | root generated 20 new + 142 unchanged - 20 fixed = 162 total (was 162) |
| +1 | golang | 17m 27s | the patch passed |
| +1 | javac | 17m 27s | the patch passed |
| -0 | checkstyle | 0m 45s | hadoop-common-project/hadoop-common: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 | mvnsite | 1m 22s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 41s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 0s | the patch passed |
| +1 | findbugs | 2m 48s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 9m 38s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 46s | The patch does not generate ASF License warnings. |
| | | 114m 25s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29466/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15445 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006720/15445.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc golang |
| uname | Linux 46a2675529c8 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64
[jira] [Commented] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file
[ https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17148329#comment-17148329 ] Igloo commented on HDFS-15445: -- The issue may lead to HBase regionserver crashes if HBase uses