Could anybody help me?

On Mon, Jun 16, 2014 at 10:27 AM, kishore alajangi <
[email protected]> wrote:

> Instead of just setting hdfs.path = /flume/messages/, do I need to
> specify something else?
>
>
> On Mon, Jun 16, 2014 at 10:25 AM, kishore alajangi <
> [email protected]> wrote:
>
>> I created the /flume/messages directories, but Flume still writes nothing
>> into them. Please help me.
>>
>>
>> On Mon, Jun 16, 2014 at 10:15 AM, kishore alajangi <
>> [email protected]> wrote:
>>
>>> Do I need to create the /flume/messages/ directories?
>>>
>>>
>>>
>>> On Mon, Jun 16, 2014 at 10:14 AM, kishore alajangi <
>>> [email protected]> wrote:
>>>
>>>> Checked; nothing is written to HDFS.
>>>>
>>>>
>>>> On Mon, Jun 16, 2014 at 10:10 AM, Sharninder <[email protected]>
>>>> wrote:
>>>>
>>>>> That just means the source has done its work and is waiting for more
>>>>> data to read. Did you check HDFS to see if all the data has been
>>>>> written?
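>>>>>
>>>>> For example, something like this should list anything the sink has
>>>>> written (assuming the /flume/messages path from your config):
>>>>>
>>>>>     hadoop fs -ls -R /flume/messages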
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Jun 16, 2014 at 11:34 AM, kishore alajangi <
>>>>> [email protected]> wrote:
>>>>>
>>>>>> Hi Mohit and Sharninder,
>>>>>>
>>>>>> Thanks for the replies. After I ran it with -n tier1, I got a "source
>>>>>> is not a directory" error, so I changed the spool directory to /tmp/
>>>>>> and hdfs.path to /flume/messages/ in the config file and re-ran the
>>>>>> command. The INFO message I am getting now is "Spooling Directory
>>>>>> Source runner has shutdown". What could be the problem? Please help me.
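>>>>>>
>>>>>> (One way to see whether the spool source has picked up any files is
>>>>>> to look for the .COMPLETED suffix Flume adds to ingested files by
>>>>>> default; a quick sketch, assuming the /tmp spool directory above:
>>>>>>
>>>>>>     ls -l /tmp/*.COMPLETED
>>>>>>
>>>>>> If nothing has been renamed, the source never consumed anything.)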
>>>>>>
>>>>>>
>>>>>> On Sun, Jun 15, 2014 at 10:21 PM, Mohit Durgapal <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Replace -n agent with -n tier1
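>>>>>>>
>>>>>>> i.e. the full command becomes (same paths as in your message):
>>>>>>>
>>>>>>>     ./flume-ng agent --conf ./conf/ -f bin/example.conf \
>>>>>>>         -Dflume.root.logger=DEBUG,console -n tier1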
>>>>>>>
>>>>>>>
>>>>>>> On Sunday, June 15, 2014, kishore alajangi <
>>>>>>> [email protected]> wrote:
>>>>>>>
>>>>>>>> Dear Sharninder,
>>>>>>>>
>>>>>>>> Thanks for your reply. Yes, I am playing with Flume; as you
>>>>>>>> suggested, I am using the spooling directory source. My
>>>>>>>> configuration file looks like this:
>>>>>>>>
>>>>>>>> tier1.sources  = source1
>>>>>>>> tier1.channels = channel1
>>>>>>>> tier1.sinks    = sink1
>>>>>>>>
>>>>>>>> tier1.sources.source1.type     = spooldir
>>>>>>>> tier1.sources.source1.spoolDir = /var/log/messages
>>>>>>>> tier1.sources.source1.channels = channel1
>>>>>>>> tier1.channels.channel1.type   = memory
>>>>>>>>
>>>>>>>> tier1.sinks.sink1.type         = hdfs
>>>>>>>> tier1.sinks.sink1.hdfs.path = hdfs://localhost:8020/flume/messages
>>>>>>>> tier1.sinks.sink1.hdfs.fileType = SequenceFile
>>>>>>>> tier1.sinks.sink1.hdfs.filePrefix = data
>>>>>>>> tier1.sinks.sink1.hdfs.fileSuffix = .seq
>>>>>>>>
>>>>>>>> # Roll based on the block size only
>>>>>>>> tier1.sinks.sink1.hdfs.rollCount = 0
>>>>>>>> tier1.sinks.sink1.hdfs.rollInterval = 0
>>>>>>>> tier1.sinks.sink1.hdfs.rollSize = 120000000
>>>>>>>> # Seconds to wait before closing an idle file.
>>>>>>>> tier1.sinks.sink1.hdfs.idleTimeout = 60
>>>>>>>> tier1.sinks.sink1.channel      = channel1
>>>>>>>>
>>>>>>>> tier1.channels.channel1.capacity = 100000
>>>>>>>> tier1.sources.source1.deserializer.maxLineLength = 32768
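>>>>>>>>
>>>>>>>> (Note: the spooling directory source needs spoolDir to be a
>>>>>>>> directory of closed files that it may rename, and /var/log/messages
>>>>>>>> is normally a single file. A hypothetical staging step, where the
>>>>>>>> /var/spool/flume path is only an example:
>>>>>>>>
>>>>>>>>     mkdir -p /var/spool/flume
>>>>>>>>     cp /var/log/messages /var/spool/flume/messages.$(date +%s)
>>>>>>>>
>>>>>>>> and point spoolDir at /var/spool/flume instead.)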
>>>>>>>>
>>>>>>>> The command I used is:
>>>>>>>>
>>>>>>>> ./flume-ng agent --conf ./conf/ -f bin/example.conf 
>>>>>>>> -Dflume.root.logger=DEBUG,console -n agent
>>>>>>>>
>>>>>>>> After it creates the sources, channels, and sinks for the tier1
>>>>>>>> agent, it gives this warning:
>>>>>>>>
>>>>>>>> "No configuration found for this host:agent"
>>>>>>>>
>>>>>>>> Any help?
>>>>>>>>
>>>>>>>> On Sun, Jun 15, 2014 at 11:18 AM, Sharninder <[email protected]>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>> I want to copy my local data to HDFS using Flume on a single
>>>>>>>>>> machine which is running Hadoop. How can I do that? Please help me.
>>>>>>>>>>
>>>>>>>>> What is this "local data"?
>>>>>>>>>
>>>>>>>>> If it's just files, why not use the hadoop fs copy command
>>>>>>>>> instead? If you want to play around with Flume, take a look at the
>>>>>>>>> spooling directory source or the exec source; you should be able to
>>>>>>>>> put something together that pushes data through Flume to Hadoop.
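>>>>>>>>>
>>>>>>>>> For plain files, something like this is all it takes (the paths
>>>>>>>>> here are only examples):
>>>>>>>>>
>>>>>>>>>     hadoop fs -mkdir -p /flume/messages
>>>>>>>>>     hadoop fs -put /path/to/localfile /flume/messages/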
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Sharninder
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Thanks,
>>>>>>>> Kishore.
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Thanks,
>>>>>> Kishore.
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Kishore.
>>>>
>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Kishore.
>>>
>>
>>
>>
>> --
>> Thanks,
>> Kishore.
>>
>
>
>
> --
> Thanks,
> Kishore.
>



-- 
Thanks,
Kishore.
