[
https://issues.apache.org/jira/browse/HDFS-4467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649587#comment-13649587
]
suo tong commented on HDFS-4467:
--------------------------------
Hi Shubhangi Garg, in what situation does the segmentation fault happen for you? In my
program, if I declare a byte array that takes about 1 MB of stack space, it causes a
segmentation fault. Below is the code:
#include "hdfs.h"
- int main(int argc, char **argv) {
| char b[1024000] = {0};
| hdfsFS fs = hdfsConnect("default", 0);
|- if (!fs) {
|| fprintf(stderr, "Oops! Failed to connect to hdfs!\n");
|| exit(-1);
|| }
|
| hdfsDisconnect(fs);
| return 0;
| }
The core file shows that the crash happens inside JNI_CreateJavaVM:
(gdb) where
#0 0x000000302af3f53e in vfprintf () from /lib64/tls/libc.so.6
#1 0x000000302af61f54 in vsnprintf () from /lib64/tls/libc.so.6
#2 0x000000302af48001 in snprintf () from /lib64/tls/libc.so.6
#3 0x0000002a95c6351b in os::dll_build_name () from
..//../java6/jre/lib/amd64/server/libjvm.so
#4 0x0000002a958caae2 in ClassLoader::load_zip_library () from
..//../java6/jre/lib/amd64/server/libjvm.so
#5 0x0000002a958cbe37 in ClassLoader::initialize () from
..//../java6/jre/lib/amd64/server/libjvm.so
#6 0x0000002a958cbfe9 in classLoader_init () from
..//../java6/jre/lib/amd64/server/libjvm.so
#7 0x0000002a95a03ef7 in init_globals () from
..//../java6/jre/lib/amd64/server/libjvm.so
#8 0x0000002a95d748b4 in Threads::create_vm () from
..//../java6/jre/lib/amd64/server/libjvm.so
#9 0x0000002a95a72180 in JNI_CreateJavaVM () from
..//../java6/jre/lib/amd64/server/libjvm.so
#10 0x0000002a9616b742 in getJNIEnv () at hdfsJniHelper.c:503
#11 0x0000002a96164fd7 in hdfsConnectAsUser (host=0x400900 "03" <Address
0x400906 out of bounds>, port=0, user=0x0, password=0x0) at hdfs.c:201
#12 0x00000000004007bf in main ()
Has anyone else run into this problem? I guess this may be a JDK bug, but I am not
sure.
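One isolation step that might be worth trying (an untested sketch; it assumes the crash is tied to creating the JVM on the primordial thread while the large array occupies that thread's stack): call hdfsConnect from a separately created pthread, so that JNI_CreateJavaVM does not run on the thread holding the 1 MB buffer.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include "hdfs.h"

/* Worker thread: connect to HDFS off the main (primordial) thread. */
static void *connect_worker(void *arg) {
    hdfsFS *out = (hdfsFS *)arg;
    *out = hdfsConnect("default", 0);
    return NULL;
}

int main(void) {
    char b[1024000] = {0};   /* same large stack buffer as in the repro above */
    hdfsFS fs = NULL;
    pthread_t tid;

    /* JNI_CreateJavaVM now runs on a freshly created thread rather than
       on the primordial thread whose stack holds the 1 MB array. */
    if (pthread_create(&tid, NULL, connect_worker, &fs) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        exit(-1);
    }
    pthread_join(tid, NULL);

    if (!fs) {
        fprintf(stderr, "Oops! Failed to connect to hdfs!\n");
        exit(-1);
    }
    (void)b;                 /* keep the array in scope for the whole run */
    hdfsDisconnect(fs);
    return 0;
}

Build with the usual libhdfs/JVM link flags plus -pthread. If the fault disappears, that would point at stack layout on the primordial thread rather than at libhdfs itself.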
> Segmentation fault in libhdfs while connecting to HDFS, in an application
> populating Hive Tables
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-4467
> URL: https://issues.apache.org/jira/browse/HDFS-4467
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: libhdfs
> Affects Versions: 1.0.4
> Environment: Ubuntu 12.04 (32 bit), application in C++, hadoop 1.0.4
> Reporter: Shubhangi Garg
>
> Connecting to HDFS using the compiled libhdfs library gives a segmentation
> fault and memory leaks, easily verifiable with valgrind.
> Even a simple application program given below has memory leaks:
> #include "hdfs.h"
> #include <iostream>
> int main(int argc, char **argv) {
> hdfsFS fs = hdfsConnect("localhost", 9000);
> const char* writePath = "/tmp/testfile.txt";
> hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0,
> 0);
> if(!writeFile) {
> fprintf(stderr, "Failed to open %s for writing!\n", writePath);
> exit(-1);
> }
> char* buffer = "Hello, World!";
> tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer,
> strlen(buffer)+1);
> if (hdfsFlush(fs, writeFile)) {
> fprintf(stderr, "Failed to 'flush' %s\n", writePath);
> exit(-1);
> }
> hdfsCloseFile(fs, writeFile);
> }
> shell>valgrind --leak-check=full ./sample
> ==12773== LEAK SUMMARY:
> ==12773== definitely lost: 7,893 bytes in 21 blocks
> ==12773== indirectly lost: 4,460 bytes in 23 blocks
> ==12773== possibly lost: 119,833 bytes in 121 blocks
> ==12773== still reachable: 1,349,514 bytes in 8,953 blocks
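To narrow down how much of the valgrind summary comes from JVM start-up rather than from the write path, a connect-only run can be compared against the program above (a minimal sketch, assuming the same hadoop 1.0.4 libhdfs API; allocations made internally by the embedded JVM are expected to show up as "still reachable" regardless of what the application frees):

#include <stdio.h>
#include <stdlib.h>
#include "hdfs.h"

/* Minimal connect/disconnect cycle: run it under
   valgrind --leak-check=full and compare the leak summary with the
   full write test above. */
int main(void) {
    hdfsFS fs = hdfsConnect("localhost", 9000);
    if (!fs) {
        fprintf(stderr, "Failed to connect to hdfs!\n");
        exit(-1);
    }
    hdfsDisconnect(fs);
    return 0;
}

This sketch also calls hdfsDisconnect, which the snippet above omits; releasing the FileSystem handle at least drops the client-side references before exit.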