Hi,

I am trying to get events from Log4j 1.x into HDFS through Flume, using
the Log4j Flume appender. I created two appenders, FILE and flume. The
FILE appender works, but with the flume appender the program just hangs
in Eclipse. Flume itself works properly: I am able to send messages to the
Avro source using the Avro client and see them in HDFS. But it is not
getting integrated with Log4j 1.x.

-----

I don't see any exception; the only thing in log.out is the following:

Batch size string = null
Using Netty bootstrap options: {tcpNoDelay=true, connectTimeoutMillis=20000}
Connecting to localhost/127.0.0.1:41414
[id: 0x52a00770] OPEN

and this is from the Flume console:

2013-10-23 14:32:32,145 (pool-5-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 => /127.0.0.1:41414] OPEN
2013-10-23 14:32:32,148 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 => /127.0.0.1:41414] BOUND: /127.0.0.1:41414
2013-10-23 14:32:32,148 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 => /127.0.0.1:41414] CONNECTED: /127.0.0.1:46037
2013-10-23 14:32:43,086 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 :> /127.0.0.1:41414] DISCONNECTED
2013-10-23 14:32:43,096 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 :> /127.0.0.1:41414] UNBOUND
2013-10-23 14:32:43,096 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0x577cf6e4, /127.0.0.1:46037 :> /127.0.0.1:41414] CLOSED
2013-10-23 14:32:43,097 (pool-6-thread-1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.channelClosed(NettyServer.java:209)] Connection to /127.0.0.1:46037 disconnected.

-----

In case it helps, I ran the program in debug mode, suspended it when it
hung, and took the stack trace below. The Avro/Netty I/O worker thread is
sitting inside Logger.callAppenders(). I tried to look into the code, but
I am still not sure why the program hangs with the flume appender.

Daemon Thread [Avro NettyTransceiver I/O Worker 1] (Suspended)
    Logger(Category).callAppenders(LoggingEvent) line: 205
    Logger(Category).forcedLog(String, Priority, Object, Throwable) line: 391
    Logger(Category).log(String, Priority, Object, Throwable) line: 856
    Log4jLoggerAdapter.debug(String) line: 209
    NettyTransceiver$NettyClientAvroHandler.handleUpstream(ChannelHandlerContext, ChannelEvent) line: 491
    DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent) line: 564
    DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(ChannelEvent) line: 792
    NettyTransportCodec$NettyFrameDecoder(SimpleChannelUpstreamHandler).channelBound(ChannelHandlerContext, ChannelStateEvent) line: 166
    NettyTransportCodec$NettyFrameDecoder(SimpleChannelUpstreamHandler).handleUpstream(ChannelHandlerContext, ChannelEvent) line: 98
    DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline$DefaultChannelHandlerContext, ChannelEvent) line: 564
    DefaultChannelPipeline.sendUpstream(ChannelEvent) line: 559
    Channels.fireChannelBound(Channel, SocketAddress) line: 199
    NioWorker$RegisterTask.run() line: 191
    NioWorker(AbstractNioWorker).processRegisterTaskQueue() line: 329
    NioWorker(AbstractNioWorker).run() line: 235
    NioWorker.run() line: 38
    DeadLockProofWorker$1.run() line: 42
    ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: 1145
    ThreadPoolExecutor$Worker.run() line: 615
    Thread.run() line: 744
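
If I am reading the trace right, the Netty I/O worker is itself trying to
log through the same root logger (via slf4j-log4j12), so it blocks on the
monitor that Category.callAppenders() takes, while my main thread holds
that monitor inside the flume appender waiting for the connection to come
up. Here is a minimal, self-contained sketch of the lock pattern I suspect
(plain Java, no Flume classes; the class and field names are mine, just to
illustrate the guess):

```java
import java.util.concurrent.CountDownLatch;

// Toy reproduction of the suspected lock pattern: one thread holds the
// "logger" monitor while waiting for a handshake that can only be finished
// by a second thread, which first needs that same monitor.
public class AppenderDeadlockSketch {

    static final Object loggerMonitor = new Object();          // stands in for the Category lock
    static final CountDownLatch handshakeDone = new CountDownLatch(1);

    public static boolean deadlocks() {
        // "Main" thread: inside callAppenders() -> flume appender append(),
        // blocked until the Avro handshake finishes.
        Thread caller = new Thread(() -> {
            synchronized (loggerMonitor) {
                try {
                    handshakeDone.await();                     // never completes
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();        // released during cleanup below
                }
            }
        });
        // "Netty I/O worker": wants to log a DEBUG line first, so it needs
        // the logger monitor before it can signal handshake completion.
        Thread ioWorker = new Thread(() -> {
            synchronized (loggerMonitor) {                     // blocks forever
                handshakeDone.countDown();
            }
        });
        caller.setDaemon(true);
        ioWorker.setDaemon(true);
        try {
            caller.start();
            Thread.sleep(100);                                 // let the caller take the monitor first
            ioWorker.start();
            caller.join(1000);                                 // would return quickly if there were no deadlock
            boolean stuck = caller.isAlive() && ioWorker.isAlive();
            caller.interrupt();                                // unwedge the toy threads
            return stuck;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("deadlocked: " + deadlocks());      // prints "deadlocked: true"
    }
}
```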

-----

Here is the Java program

import org.apache.log4j.Logger;

public class log4jExample {

    // Log against the root logger so both the FILE and flume appenders fire
    static Logger log = Logger.getRootLogger();

    public static void main(String[] args) {
        log.debug("Hello, this is a debug message");
    }
}

-----

Here is the log4j.properties

# Define the root logger with appender file
log = /home/vm4learning/WorkSpace/BigData/Log4J-Example/log
log4j.rootLogger = DEBUG, FILE, flume

# Define the file appender
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=${log}/log.out
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%m%n

# Define the flume appender
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 41414
log4j.appender.flume.UnsafeMode = false
log4j.appender.flume.layout=org.apache.log4j.PatternLayout
log4j.appender.flume.layout.ConversionPattern=%m%n
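
One thing I am wondering about: with rootLogger at DEBUG and
slf4j-log4j12 on the classpath, the Flume/Avro/Netty client internals
presumably log back through the same root logger, i.e. into the flume
appender itself. If that is the issue, I assume something like the
following (a sketch, untested on my side) would keep their output away
from the flume appender:

```
# Route the Flume/Avro/Netty client internals to the file appender only,
# so their DEBUG output never re-enters the flume appender (sketch, untested)
log4j.logger.org.apache.flume = INFO, FILE
log4j.additivity.org.apache.flume = false
log4j.logger.org.apache.avro = INFO, FILE
log4j.additivity.org.apache.avro = false
log4j.logger.org.jboss.netty = INFO, FILE
log4j.additivity.org.jboss.netty = false
```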

-----

Here are the dependencies

<classpathentry kind="lib" path="flume-ng-log4jappender-1.4.0.jar"/>
<classpathentry kind="lib" path="log4j-1.2.17.jar"/>
<classpathentry kind="lib" path="flume-ng-sdk-1.4.0.jar"/>
<classpathentry kind="lib" path="avro-1.7.3.jar"/>
<classpathentry kind="lib" path="netty-3.4.0.Final.jar"/>
<classpathentry kind="lib" path="avro-ipc-1.7.3.jar"/>
<classpathentry kind="lib" path="slf4j-api-1.6.1.jar"/>
<classpathentry kind="lib" path="slf4j-log4j12-1.6.1.jar"/>

-----

Here is the flume.conf content

# Tell agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = hdfs-sink1

# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory

# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414

# Define an HDFS sink that writes all events it receives to HDFS
# and connect it to the other end of the same channel.
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://localhost:9000/flume/events/

agent1.sinks.hdfs-sink1.channel = ch1
agent1.sources.avro-source1.channels = ch1
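
For completeness, the channel and sink are otherwise at their defaults;
as far as I understand from the Flume 1.4 user guide, the settings left
implicit above would be (untested in my setup):

```
# Implicit defaults for the same channel and sink (sketch, per the 1.4 docs)
agent1.channels.ch1.capacity = 100
agent1.channels.ch1.transactionCapacity = 100
# default file type; DataStream would write plain text instead
agent1.sinks.hdfs-sink1.hdfs.fileType = SequenceFile
```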

How can I get around this problem?

Thanks,
Praveen
