Do you mean something like this:
host2.sinks.avroSink.batch-size = 1000
host2.sinks.avroSink.runner.type = polling
host2.sinks.avroSink.runner.polling.interval = 1
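
And maybe raising the channel's buffer as well; a sketch, assuming the
memory channel supports a capacity property (I haven't verified the
default on your version):

host2.channels.memoryChannel.capacity = 100000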

Regards,
Som Shekhar

On Thu, Apr 26, 2012 at 2:07 PM, alo alt <wget.n...@googlemail.com> wrote:

> Oh, 60k events are indeed far too many for a 20 MB max heap size. If I
> calculate right you need around 600 MB of heap. A memory channel with a
> 20 MB heap can handle around 2.3K events per second, a JDBC channel
> around 326. Did you batch the lines or read them sequentially? Batching
> could help you too (collect 1k events and send them out, then the next
> 1k, and so on).
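>
> (Back-of-the-envelope, assuming roughly 10 KB per event: 60,000 events
> x 10 KB is about 600 MB, while a 20 MB heap fits only around 2,000 such
> events at a time.)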
>
> cheers
> - Alex
>
> --
> Alexander Lorenz
> http://mapredit.blogspot.com
>
> On Apr 26, 2012, at 10:28 AM, shekhar sharma wrote:
>
> > Hello Alex,
> > One more bit of information: out of 60,000 records, previously I
> > could process all 60,000 of them, but now Esper is logging only 5,000
> > records.
> >
> >
> > Regards,
> > Som
> >
> > On Thu, Apr 26, 2012 at 1:54 PM, shekhar sharma <shekhar2...@gmail.com> wrote:
> > Hi Alex,
> > Everything is working fine; Esper is able to detect the events and
> > can write them to a separate log file.
> > I have written a Python script which generates the log file, creating
> > almost 60,000 records, and as you can see, my exec source tail
> > command is working on that log file...
> > The events are detected and processed by Esper, and this error is not
> > thrown instantaneously; it appears only after some time...
> >
> > Could it be because of the -Xmx20m option? The system is my virtual
> > Linux box with 1 GB RAM and a 20 GB HD.
> > Can the size of the event also have an effect? The reason I am asking
> > is:
> >
> > The event read by the exec source is a SimpleEvent, which has two
> > member variables: a Map for the headers and a byte[] for the body. In
> > Esper, if you want to apply an "IN" or "LIKE" filter, it requires a
> > String, so I created another member variable of type String holding
> > the byte array converted to a String. While sending this event to the
> > channel, is this bloating up the channel capacity?
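> >
> > Roughly, what I did looks like this (a hypothetical sketch; names are
> > illustrative, not the actual SimpleEvent source):
> >
> > import java.util.Map;
> >
> > public class SimpleEvent {
> >     private Map<String, String> headers; // event headers
> >     private byte[] body;                 // original payload
> >     private String bodyString;           // added for Esper "IN"/"LIKE" filters
> >
> >     public void setBody(byte[] body) {
> >         this.body = body;
> >         // The payload is now stored twice (bytes + chars), so each
> >         // event sitting in the channel takes roughly twice the memory.
> >         this.bodyString = new String(body);
> >     }
> > }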
> >
> > Regards,
> > Som
> >
> >
> >
> > On Thu, Apr 26, 2012 at 1:41 PM, alo alt <wget.n...@googlemail.com> wrote:
> > Thanks,
> >
> > org.apache.flume.sink.esper.EsperSink is available on the CLASSPATH
> > and Flume can load it? Or did you write the sink yourself?
> >
> > best,
> >  Alex
> >
> >
> > --
> > Alexander Lorenz
> > http://mapredit.blogspot.com
> >
> > On Apr 26, 2012, at 10:03 AM, shekhar sharma wrote:
> >
> > > Hello Alex,
> > > My configuration file is as follows:
> > >
> > > host1.properties file
> > >
> > > host1.sources = avroSource
> > > host1.channels = memoryChannel
> > > host1.sinks = esper
> > >
> > > #avroSource configuration
> > >
> > > host1.sources.avroSource.type = avro
> > > host1.sources.avroSource.bind = localhost
> > > host1.sources.avroSource.port = 41414
> > > host1.sources.avroSource.channels = memoryChannel
> > >
> > > #Channels
> > >
> > > host1.channels.memoryChannel.type = memory
> > >
> > >
> > > #Sinks
> > > host1.sinks.esper.type = org.apache.flume.sink.esper.EsperSink
> > > host1.sinks.esper.channel = memoryChannel
> > >
> > > host2.properties file:
> > >
> > > host2.sources = execSource
> > > host2.channels = memoryChannel
> > > host2.sinks = avroSink
> > >
> > > #execSource  configuration
> > >
> > > host2.sources.execSource.type = exec
> > > host2.sources.execSource.command = /usr/bin/tail -F /home/dev/LogScripts/json.csv
> > > host2.sources.execSource.channels = memoryChannel
> > >
> > > #Channels
> > >
> > > host2.channels.memoryChannel.type = memory
> > >
> > >
> > > #Sinks
> > > host2.sinks.avroSink.type = avro
> > > host2.sinks.avroSink.hostname=localhost
> > > host2.sinks.avroSink.port=41414
> > > host2.sinks.avroSink.batch-size = 10
> > > host2.sinks.avroSink.runner.type = polling
> > > host2.sinks.avroSink.runner.polling.interval = 1
> > > host2.sinks.avroSink.channel = memoryChannel
> > >
> > > Regards,
> > > Som Shekhar Sharma
> > >
> > >
> > > On Thu, Apr 26, 2012 at 12:35 PM, shekhar sharma <shekhar2...@gmail.com> wrote:
> > > Hello,
> > > While using the memory channel I am getting the following error;
> > > what could be the reason for it:
> > > org.apache.flume.ChannelException: Space for commit to queue couldn't be acquired Sinks are likely not keeping up with sources, or the buffer size is too tight
> > >         at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:84)
> > >         at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
> > >         at org.apache.flume.channel.ChannelProcessor.processEvent(ChannelProcessor.java:178)
> > >         at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:267)
> > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> > >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> > >         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> > >         at java.lang.Thread.run(Thread.java:662)
> > >
> > >
> > > Regards,
> > > Som
> > >
> >
> >
> >
>
>
