Thanks Brock

How do I add the hadoop command to the PATH? Could you please give me an example?

Thanks
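
In case a concrete example helps: a minimal sketch, assuming Hadoop 1.0.3 is unpacked under ~/hadoop-1.0.3 in your Cygwin home (adjust HADOOP_HOME to wherever your copy actually lives):

```shell
# Hypothetical install location -- change to wherever hadoop-1.0.3 is unpacked.
export HADOOP_HOME="$HOME/hadoop-1.0.3"
# Put the directory containing the hadoop script on the PATH.
export PATH="$PATH:$HADOOP_HOME/bin"
```

Putting those two lines in ~/.bashrc applies them to every new Cygwin shell; `which hadoop` should then print the script's location, and flume-ng can find it.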

On Wed, Aug 1, 2012 at 3:07 PM, Brock Noland <[email protected]> wrote:

> Hi,
>
> I think this is because you need more than just the hadoop-core.jar file.
> If you add the "hadoop" command to your path, the flume-ng script should
> pick up these dependencies automatically.
>
> Brock
>
>
> On Tue, Jul 31, 2012 at 12:25 PM, mardan Khan <[email protected]> wrote:
>
>> HI,
>>
>> I am posting again as I am still struggling to find a solution. I have a
>> simple configuration file which uploads data into Hadoop, but it gives me
>> the error message "Agent failed because dependencies were not found". I am
>> using the following:
>>
>> 1) Flume 1.2.0
>> 2) Hadoop 1.0.3
>> 3) Windows 7
>> 4) Cygwin
>>
>>
>> Configuration File:
>>
>> agent1.sources = source1
>> agent1.sinks = sink1
>> agent1.channels = channel1
>>
>> agent1.sources.source1.type = netcat
>> agent1.sources.source1.bind = localhost
>> agent1.sources.source1.port = 23
>>
>> agent1.sinks.sink1.type = logger
>> agent1.sinks.sink1.type = hdfs
>> agent1.sinks.sink1.hdfs.path = hdfs://localhost:9000/user/cyg_server/flume
>> agent1.channels.channel1.type = memory
>> agent1.channels.channel1.capacity = 1000
>> agent1.channels.channel1.transactionCapactiy = 100
>>
>> agent1.sources.source1.channels = channel1
>> agent1.sinks.sink1.channel = channel1
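
Two side notes on this config, separate from the classpath error below: sink1's type is declared twice, so `hdfs` silently overrides `logger` (only `type=hdfs` appears in the validated config in the log), and `transactionCapactiy` is a misspelling of `transactionCapacity`, so the channel falls back to its default transaction capacity. A cleaned-up version of those lines might look like:

```properties
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://localhost:9000/user/cyg_server/flume
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 1000
agent1.channels.channel1.transactionCapacity = 100
```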
>>
>> *ERROR MESSAGE*
>>
>> mukhtaj@mukhtaj-PC ~/apache-flume
>> $ bin/flume-ng agent -n agent1 -c conf -f conf/flume-conf.properties.template
>> cygpath: can't convert empty path
>> + /cygdrive/c/java/jdk1.7.0_01/bin/java -Xmx20m -cp 'C:\cygwin\home\mukhtaj\apache-flume\conf;C:\cygwin\home\mukhtaj\apache-flume\lib\*' -Djava.library.path= org.apache.flume.node.Application -n agent1 -f conf/flume-conf.properties.template
>>
>> 2012-07-31 18:17:56,120 (main) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 1
>> 2012-07-31 18:17:56,124 (main) [INFO - org.apache.flume.node.FlumeNode.start(FlumeNode.java:54)] Flume node starting - agent1
>> 2012-07-31 18:17:56,128 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.start(DefaultLogicalNodeManager.java:187)] Node manager starting
>> 2012-07-31 18:17:56,128 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider.start(AbstractFileConfigurationProvider.java:67)] Configuration provider starting
>> 2012-07-31 18:17:56,130 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 9
>> 2012-07-31 18:17:56,131 (lifecycleSupervisor-1-0) [DEBUG - org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.start(DefaultLogicalNodeManager.java:191)] Node manager started
>> 2012-07-31 18:17:56,132 (lifecycleSupervisor-1-1) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider.start(AbstractFileConfigurationProvider.java:86)] Configuration provider started
>> 2012-07-31 18:17:56,132 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:conf\flume-conf.properties.template for changes
>> 2012-07-31 18:17:56,134 (conf-file-poller-0) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:195)] Reloading configuration file:conf\flume-conf.properties.template
>> 2012-07-31 18:17:56,140 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
>> 2012-07-31 18:17:56,141 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:992)] Created context for sink1: hdfs.path
>> 2012-07-31 18:17:56,142 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
>> 2012-07-31 18:17:56,143 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:sink1
>> 2012-07-31 18:17:56,145 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:902)] Added sinks: sink1 Agent: agent1
>> 2012-07-31 18:17:56,146 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.isValid(FlumeConfiguration.java:295)] Starting validation of configuration for agent: agent1, initial-configuration: AgentConfiguration[agent1]
>> SOURCES: {source1={ parameters:{port=23, channels=channel1, type=netcat, bind=localhost} }}
>> CHANNELS: {channel1={ parameters:{transactionCapactiy=100, capacity=1000, type=memory} }}
>> SINKS: {sink1={ parameters:{hdfs.path=hdfs://localhost:9000/user/cyg_server/flume, type=hdfs, channel=channel1} }}
>>
>> 2012-07-31 18:17:56,156 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateChannels(FlumeConfiguration.java:450)] Created channel channel1
>> 2012-07-31 18:17:56,174 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSinks(FlumeConfiguration.java:649)] Creating sink: sink1 using HDFS
>> 2012-07-31 18:17:56,176 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.isValid(FlumeConfiguration.java:353)] Post validation configuration for agent1
>> AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[agent1]
>> SOURCES: {source1={ parameters:{port=23, channels=channel1, type=netcat, bind=localhost} }}
>> CHANNELS: {channel1={ parameters:{transactionCapactiy=100, capacity=1000, type=memory} }}
>> SINKS: {sink1={ parameters:{hdfs.path=hdfs://localhost:9000/user/cyg_server/flume, type=hdfs, channel=channel1} }}
>>
>> 2012-07-31 18:17:56,177 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:117)] Channels:channel1
>> 2012-07-31 18:17:56,177 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:118)] Sinks sink1
>> 2012-07-31 18:17:56,178 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:119)] Sources source1
>> 2012-07-31 18:17:56,178 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:122)] Post-validation flume configuration contains configuration for agents: [agent1]
>> 2012-07-31 18:17:56,178 (conf-file-poller-0) [INFO - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadChannels(PropertiesFileConfigurationProvider.java:249)] Creating channels
>> 2012-07-31 18:17:56,179 (conf-file-poller-0) [DEBUG - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:68)] Creating instance of channel channel1 type memory
>> 2012-07-31 18:17:56,238 (conf-file-poller-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.<init>(MonitoredCounterGroup.java:68)] Monitoried counter group for type: CHANNEL, name: channel1, registered successfully.
>> 2012-07-31 18:17:56,239 (conf-file-poller-0) [INFO - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadChannels(PropertiesFileConfigurationProvider.java:273)] created channel channel1
>> 2012-07-31 18:17:56,239 (conf-file-poller-0) [DEBUG - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:74)] Creating instance of source source1, type netcat
>> 2012-07-31 18:17:56,316 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:70)] Creating instance of sink: sink1, type: hdfs
>> 2012-07-31 18:17:56,392 (conf-file-poller-0) [DEBUG - org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)] java.io.IOException: config()
>>         at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)
>>         at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:214)
>>         at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
>>         at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
>>         at org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:516)
>>         at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:238)
>>         at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
>>         at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSinks(PropertiesFileConfigurationProvider.java:373)
>>         at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:223)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>         at java.lang.Thread.run(Thread.java:722)
>>
>> 2012-07-31 18:17:56,464 (conf-file-poller-0) [DEBUG - org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:139)] Creating new Groups object
>> 2012-07-31 18:17:56,508 (conf-file-poller-0) [DEBUG - org.apache.hadoop.security.Groups.<init>(Groups.java:59)] Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
>> 2012-07-31 18:17:56,551 (conf-file-poller-0) [DEBUG - org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)] java.io.IOException: config()
>>         at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:227)
>>         at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:214)
>>         at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
>>         at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
>>         at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:79)
>>         at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
>>         at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
>>         at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
>>         at org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:516)
>>         at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:238)
>>         at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
>>         at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSinks(PropertiesFileConfigurationProvider.java:373)
>>         at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:223)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>         at java.lang.Thread.run(Thread.java:722)
>>
>> 2012-07-31 18:17:56,592 (conf-file-poller-0) [ERROR - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:207)] Failed to start agent because dependencies were not found in classpath. Error follows.
>> java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
>>         at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
>>         at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
>>         at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
>>         at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:216)
>>         at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
>>         at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
>>         at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:79)
>>         at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
>>         at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
>>         at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
>>         at org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:516)
>>         at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:238)
>>         at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
>>         at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.loadSinks(PropertiesFileConfigurationProvider.java:373)
>>         at org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:223)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider.doLoad(AbstractFileConfigurationProvider.java:123)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider.access$300(AbstractFileConfigurationProvider.java:38)
>>         at org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:202)
>>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>         at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>         at java.lang.Thread.run(Thread.java:722)
>> Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.Configuration
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>         ... 26 more
>> 2012-07-31 18:18:26,601 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:conf\flume-conf.properties.template for changes
>>
>>
>> Please suggest a solution if anyone has one. I don't understand what is
>> meant by "dependencies were not found".
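
For what it's worth, a sketch of why putting hadoop on the PATH fixes this (simplified from what the flume-ng script does; not its literal text): when a `hadoop` command is found, flume-ng appends Hadoop's full classpath, which is what brings in commons-configuration and Hadoop's other dependencies. The NoClassDefFoundError above is the JVM saying that class was on none of the classpath entries shown in the java invocation.

```shell
# Simplified, assumed sketch of flume-ng's classpath handling -- not the
# literal script. If a hadoop command is available, append its classpath.
FLUME_CLASSPATH="conf:lib/*"
if command -v hadoop >/dev/null 2>&1; then
  FLUME_CLASSPATH="$FLUME_CLASSPATH:$(hadoop classpath)"
fi
echo "$FLUME_CLASSPATH"
```

With no hadoop on the PATH, the classpath stays as just Flume's own conf and lib directories, which is exactly the situation in the error above.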
>>
>> Thanks
>>
>>
>>
>
>
> --
> Apache MRUnit - Unit testing MapReduce -
> http://incubator.apache.org/mrunit/
>
