Hi ct -

Thanks very much for your bug report and suggestions.

Right now, we're not running automated testing of the HDFS
filesystem plugin, and so it's a little bit harder for
us to work with this code. I think we're hoping to start
such automated testing in the not-too-distant future.

Anyway, in this case, is it possible to prepare a
pull request (against https://github.com/chapel-lang/chapel )
that covers these changes? They look like good bug fixes.
If you can verify that the HDFS module still functions
after the changes in your PR, that will give us some assurance
that the PR is good.

Regarding the commented-out sys_getaddrinfo, it is not
obvious to me how to fix that problem. It was commented out
because GCC (via the linker) emits a warning like this under -static:

sys.c:(.text+0x1325): warning: Using 'getaddrinfo' in statically linked
applications requires at runtime the shared libraries from the glibc
version used for linking
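
For reference, here's a minimal sketch of the kind of translation unit
that triggers that warning when linked with `gcc -static`. This is a
made-up reproducer, not a file from the Chapel tree, and `try_resolve`
is a hypothetical helper name:

```c
/* repro.c -- hypothetical minimal reproducer (not from the Chapel tree).
 * Linking any program containing this into a binary with `gcc -static`
 * produces the warning above, because glibc's getaddrinfo can dlopen()
 * NSS shared libraries at runtime even in a "static" build. */
#include <assert.h>
#include <netdb.h>
#include <string.h>

/* hypothetical helper name, for illustration only */
int try_resolve(const char *host) {
  struct addrinfo hints, *res = NULL;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
  hints.ai_socktype = SOCK_STREAM;
  int rc = getaddrinfo(host, NULL, &hints, &res);
  if (rc == 0)
    freeaddrinfo(res);
  return rc;                        /* 0 on success */
}
```

(With glibc, the call still links and numeric addresses still resolve;
the warning is about hostname lookups possibly needing the matching NSS
shared libraries at runtime.)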


It's not clear to me whether or not that warning is
really helpful (since I'd expect -static to be "mostly static"
anyway - do we really expect that the binary won't depend
on any .so files in the system used to build it?).
And in the testing configuration where we see this
warning printed, the user isn't actually explicitly requesting
a static executable...

We could move the getaddrinfo support into a different module
and re-compile the .c for it when the Chapel program
is built. That would at least make the warning appear
only for Chapel programs that use the getaddrinfo module
(instead of for all programs).
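
Concretely, the isolation could look something like the rough sketch
below. The file and function names here are made up for illustration;
this is not the actual runtime layout:

```c
/* sys_getaddrinfo.c -- hypothetical separate translation unit.
 * The idea: keep all getaddrinfo references out of the core runtime
 * library, and compile/link this file only when a Chapel program
 * actually uses the corresponding module, so the glibc -static
 * warning appears only for those programs. */
#include <assert.h>
#include <netdb.h>
#include <string.h>

/* hypothetical wrapper names, for illustration only */
int chpl_sys_getaddrinfo(const char *node, const char *service,
                         struct addrinfo **res_out) {
  struct addrinfo hints;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
  hints.ai_socktype = SOCK_STREAM;
  return getaddrinfo(node, service, &hints, res_out);
}

void chpl_sys_freeaddrinfo(struct addrinfo *res) {
  freeaddrinfo(res);
}
```

The Chapel module would then extern-declare these wrappers, and the
build would only pull this .c file in when that module is used.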

So, anyway, feel free to discuss how to resolve the getaddrinfo
problem - but for now, please leave it out of your bug-fix pull
request.

Thanks very much,

-michael


On 9/24/15, 10:09 PM, "ct clmsn" <[email protected]> wrote:

>Chapel team,
>
>I've been working with the HDFS and Sys modules in Chapel. I think I've
>found a couple of issues that probably need a second set of eyes to review.
>
>$CHPL_HOME/runtime/src/qio/auxFilesys/hdfs/qio_plugin_hdfs.c, line 299
>
>(current line of code)
>- // We cannot seek unless we are in read mode! (HDFS restriction)
>- if (to_hdfs_file(fl)->file->type != INPUT)
>
>
>(a possible fix)
>+ // We cannot seek unless we are in read mode! (HDFS restriction)
>+ if (!hdfsFileIsOpenForRead(to_hdfs_file(fl)->file))
>
>The Apache hadoop team moved around type definitions in libhdfs/hdfs.h
>and to_hdfs_file(fl)->file->type isn't accessible at the moment. The
>possible fix listed could resolve this issue.
>
>
>$CHPL_HOME/modules/standard/IO.chpl, line 2246
>
>(current line of code)
>- new_str = new_str.substring(hostidx_end+1..new_str.length);
>
>(possible fix)
>+ new_str = new_str.substring(hostidx_end..new_str.length);
>
>
>
>The +1 is cutting off the initial "/" in an HDFS path string and is
>causing the call to qio_file_open_access_usr on line 2278 to break.
>
>
>$CHPL_HOME/modules/standard/HDFS.chpl, line 504 - there is a missing
>.c_str() call on a Chapel string being passed to a function expecting a
>c_string (there might be one more of these small things in that file; not
>100% sure).
>
>
>$CHPL_HOME/modules/standard/Sys.chpl, line 245 is a duplicate of line 244.
>
>
>$CHPL_HOME/modules/standard/Sys.chpl, line 264 can be called, but I get a
>runtime crash because the underlying C implementation, which starts at
>$CHPL_HOME/runtime/src/qio/sys.c, line 1407, is commented out (an
>undefined-symbol error).
>
>
>Apologies for the large info dump! I love the language, and the runtime's
>layout (along with the design/development comments) made the debugging
>process very intuitive. Despite having no experience with the runtime code,
>I was able to sort out these issues really quickly (less than a day)!
>
>Cheers!
>
>
>
>--
>
>ct
>


------------------------------------------------------------------------------
_______________________________________________
Chapel-bugs mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/chapel-bugs
