vince zhang created HDFS-10369:
----------------------------------
Summary: hdfsRead crashes when the data read reaches 128MB
Key: HDFS-10369
URL: https://issues.apache.org/jira/browse/HDFS-10369
Project: Hadoop HDFS
Issue Type: Bug
Components: fs
Reporter: vince zhang
See the code below: it crashes after the line printf("hdfsGetDefaultBlockSize2:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret); i.e. during the second hdfsRead.
hdfsFile read_file = hdfsOpenFile(fs, "/testpath", O_RDONLY, 0, 0, 1);
int total = hdfsAvailable(fs, read_file);
printf("Total:%d\n", total);
/* NOTE: `size` was not declared in the original snippet; 65536 is a guess
   based on the seek offset below, which sits exactly 65536 bytes short of
   the 128MB default block boundary (134152192 + 65536 == 134217728). */
int size = 65536;
/* was malloc(sizeof(size+1) * sizeof(char)), which allocates only
   sizeof(int) bytes, not size+1 bytes */
char* buffer = (char*)malloc((size + 1) * sizeof(char));
int ret = -1;
ret = hdfsSeek(fs, read_file, 134152192);
printf("hdfsGetDefaultBlockSize1:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
ret = hdfsRead(fs, read_file, (void*)buffer, size);
printf("hdfsGetDefaultBlockSize2:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
/* the crash occurs here, in the read that crosses the block boundary */
ret = hdfsRead(fs, read_file, (void*)buffer, size);
printf("hdfsGetDefaultBlockSize3:%d, ret:%d\n", hdfsGetDefaultBlockSize(fs), ret);
free(buffer);
hdfsCloseFile(fs, read_file);
return 0;
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]