Hi Gwen,

Thanks for the reply.


a1.sources.r1.destinationName = BUSINESS_DATA
In the above configuration, is “BUSINESS_DATA” the name of a queue?

From: Gwen Shapira [mailto:[email protected]]
Sent: Thursday, May 07, 2015 2:57 PM
To: [email protected]
Subject: Re: Getting data from IBM MQ to Hadoop

Hi Chhaya,

First, it looks like one agent should be enough. Don't run agents on the Hadoop 
cluster itself (i.e. not on the data nodes). You can give the agent its own 
machine, share it with other "edge node" services (like Hue), or install it on 
the MQ machine (if that machine is not too busy).
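
To complete that single agent, the JMS source just needs a channel and an HDFS sink beside it. A minimal sketch, assuming hypothetical names throughout (agent a1, memory channel c1, sink k1, and an example NameNode URL and HDFS path; adjust for your cluster):

```properties
# Sketch only: channel and HDFS sink to pair with the JMS source r1.
a1.sinks = k1

# Memory channel buffers events between source and sink (capacity is illustrative)
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

# HDFS sink writing raw event bodies; path and host are placeholders
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/mq-data/%Y/%m/%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Roll files by time instead of size/count
a1.sinks.k1.hdfs.rollInterval = 300
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
```
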

Second, "destinationName" should probably have been named "source": it's the 
queue or topic that contains the data in JMS.

There is a nice example in the docs:

a1.sources = r1
a1.channels = c1
a1.sources.r1.type = jms
a1.sources.r1.channels = c1
a1.sources.r1.initialContextFactory = org.apache.activemq.jndi.ActiveMQInitialContextFactory
a1.sources.r1.connectionFactory = GenericConnectionFactory
a1.sources.r1.providerURL = tcp://mqserver:61616
a1.sources.r1.destinationName = BUSINESS_DATA
a1.sources.r1.destinationType = QUEUE
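
Note the example above uses ActiveMQ's JNDI factory, which won't work against IBM MQ. A common approach for IBM MQ is a file-based JNDI context backed by a .bindings file generated with IBM's JMSAdmin tool. A minimal sketch, assuming hypothetical names (the "ConnFactory" JNDI entry, the /opt/flume/jndi directory, and the BUSINESS_DATA queue are all placeholders for whatever your MQ administrator set up):

```properties
# Sketch only: JMS source pointed at IBM MQ via a file-based JNDI context.
# Assumes a .bindings file was generated with IBM's JMSAdmin tool and the
# connection factory was registered there under the name "ConnFactory".
a1.sources.r1.type = jms
a1.sources.r1.channels = c1
a1.sources.r1.initialContextFactory = com.sun.jndi.fscontext.RefFSContextFactory
# providerURL points at the directory containing the .bindings file
a1.sources.r1.providerURL = file:///opt/flume/jndi
a1.sources.r1.connectionFactory = ConnFactory
a1.sources.r1.destinationName = BUSINESS_DATA
a1.sources.r1.destinationType = QUEUE
```

The IBM MQ client jars also need to be on Flume's classpath (e.g. via plugins.d or flume-env.sh), since Flume doesn't ship them.
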

On Thu, May 7, 2015 at 1:50 AM, Vishwakarma, Chhaya 
<[email protected]<mailto:[email protected]>> wrote:
Hi All,

I want to read data from IBM MQ and put it into HDFS.

I looked into Flume's JMS source, and it seems it can connect to IBM MQ, but I 
don't understand what "destinationType" and "destinationName" mean in the list 
of required properties. Can someone please explain?

Also, how should I be configuring my Flume agents?

flumeAgent1 (runs on the same machine as MQ) reads MQ data ------> 
flumeAgent2 (runs on the Hadoop cluster) writes into HDFS
OR is only one agent on the Hadoop cluster enough?

Can someone help me understand how MQ can be integrated with Flume?

Thanks,
Chhaya

