http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_cdh5_install.html

Cloudera Packages:
hadoop-hdfs-2.5.0+cdh5.3.1+791-1.cdh5.3.1.p0.17.el6.x86_64
hadoop-libhdfs-2.5.0+cdh5.3.1+791-1.cdh5.3.1.p0.17.el6.x86_64
hadoop-libhdfs-devel-2.5.0+cdh5.3.1+791-1.cdh5.3.1.p0.17.el6.x86_64


Java Package:
jdk-1.8.0_11-fcs.x86_64


JAVA_INCLUDES="-I/usr/java/jdk1.8.0_11/include
-I/usr/java/jdk1.8.0_11/include/linux"
JAVA_LIBS="-L/usr/java/jdk1.8.0_11/jre/lib/amd64
-L/usr/java/jdk1.8.0_11/jre/lib/amd64/server -ljava
-ljvm -lverify"

export JAVA_INCLUDES
export JAVA_LIBS


http://www.rsyslog.com/doc/omhdfs.html
https://sskaje.me/2014/08/build-omhdfs-rsyslog

  CC       omhdfs_la-omhdfs.lo
omhdfs.c: In function 'fileWrite':
omhdfs.c:315: warning: format '%u' expects type 'unsigned int', but
argument 2 has type 'size_t'
omhdfs.c: In function 'addData':
omhdfs.c:375: warning: passing argument 3 of 'fileWrite' from incompatible
pointer type
omhdfs.c:295: note: expected 'size_t *' but argument is of type 'unsigned
int *'
omhdfs.c:377: warning: passing argument 3 of 'fileWrite' from incompatible
pointer type
omhdfs.c:295: note: expected 'size_t *' but argument is of type 'unsigned
int *'
omhdfs.c: In function 'endTransaction':
omhdfs.c:451: warning: passing argument 3 of 'fileWrite' from incompatible
pointer type
omhdfs.c:295: note: expected 'size_t *' but argument is of type 'unsigned
int *'


Please help validate the fix for the above warnings: size_t was changed to
unsigned in the source file as shown below:

plugins/omhdfs/omhdfs.c

/* Note: lenWrite is reset to zero on successful write! */
static inline rsRetVal
fileWrite(file_t *pFile, uchar *buf, unsigned *lenWrite)
{

The compile appears to complete successfully with the following configure,
make, and make install:

CFLAGS="-g -I/usr/include" LDFLAGS="-L/usr/lib64" ./configure
--enable-omhdfs

(A number of other modules were also enabled; they are left out of the
sample above as out of scope for this HDFS topic.)


With the omhdfs fix above applied, the same compile flags are used with
autogen.sh to build the contrib/omhttpfs module, sskaje's alternate
HttpFS-based HDFS output plugin:
https://github.com/sskaje/rsyslog/tree/omhttpfs/contrib/omhttpfs
https://sskaje.me/2014/12/omhttpfs-rsyslog-hdfs-output-plugin

CFLAGS="-g -I/usr/include" LDFLAGS="-L/usr/lib64" ./autogen.sh
--enable-shared --enable-omhdfs --enable-omhttpfs


It appears the omhdfs module requires the following libraries to load on
rsyslog 8.7 (RHEL 6.5 RPMs):

HDFS Lib:
/usr/lib64/libhdfs.so.0.0.0

JAVA Libs:
/usr/java/jdk1.8.0_11/jre/lib/amd64/libjava.so
/usr/java/jdk1.8.0_11/jre/lib/amd64/libverify.so
/usr/java/jdk1.8.0_11/jre/lib/amd64/server/libjvm.so

The rsyslog libraries built without error, and the rsyslog service started
with a generic config to test loading both modules:
omhdfs.so
omhttpfs.so
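For reference, the generic config used for the load test can be as small as
loading the module plus a single action. The omhttpfs parameters below
(host, port, https, file, isDynFile) follow sskaje's omhttpfs documentation;
the values are placeholders pointing at a local HttpFS on its default port
14000 — adjust for your cluster:

```
module(load="omhttpfs")

local4.* action(type="omhttpfs"
         host="127.0.0.1"
         port="14000"
         https="off"
         file="/rsyslog/test.log"
         isDynFile="off")
```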


Any feedback is appreciated.   Thanks!


On Fri, Feb 13, 2015 at 10:38 AM, Kendall Green <[email protected]>
wrote:

> Can anyone speak to the state of Rsyslog HDFS output support?
>
> The omhdfs plugin does not appear to support latest version Rsyslog, is
> this correct?
>
> The github fork by sskaje/rsyslog omhttpfs/contrib appears to be another
> solution. Will this module be merged or otherwise become official if the
> omhdfs module is no longer expected to work without unreasonable effort?
>
> I'm not certain of the process for building either of the modules for the
> latest Rsyslog v8.7. Can anyone elaborate on success with this?
>
> Does anyone have directions for building support for HDFS? I'm looking for
> novice-friendly instructions that define whether Hadoop needs to be
> installed, which libraries must be present, and what the minimum is for
> packaging the plugin.
>
> Thank you for any feedback or reference to other articles on this topic.
>
> -Kendall Green
>
>
_______________________________________________
rsyslog mailing list
http://lists.adiscon.net/mailman/listinfo/rsyslog
http://www.rsyslog.com/professional-services/
What's up with rsyslog? Follow https://twitter.com/rgerhards
NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad of 
sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you DON'T LIKE 
THAT.