Wouldn't it be a better choice to store the logs offline somewhere? HDFS and S3
are both good choices...
-Mark
On Feb 27, 2015, at 16:12, Warren Kiser war...@hioscar.com wrote:
Does anyone know how to achieve unlimited log retention either globally or
on a per-topic basis? I tried
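For context, retention in Kafka is governed by broker- and topic-level settings; a minimal sketch of what unlimited retention might look like, assuming a broker version where -1 disables the corresponding limit:

```properties
# server.properties: broker-wide defaults
log.retention.ms=-1       # -1 disables time-based log deletion
log.retention.bytes=-1    # -1 disables size-based log deletion

# The same knobs exist as per-topic overrides (topic-level configs):
# retention.ms=-1
# retention.bytes=-1
```

Per-topic overrides can be applied with the topic admin tooling rather than in server.properties, which addresses the "globally or per topic" part of the question.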
Just to be clear: this is going to be exposed via some API the clients can call
at startup?
On Nov 12, 2014, at 08:59, Guozhang Wang wangg...@gmail.com wrote:
Sounds great, +1 on this.
On Tue, Nov 11, 2014 at 1:36 PM, Gwen Shapira gshap...@cloudera.com wrote:
So it looks like we can ...
On Wed, Nov 12, 2014 at 9:09 AM, Mark Roberts wiz...@gmail.com
wrote:
Just to be clear: this is going to be exposed via some API the clients
can call at startup?
On Nov 12, 2014, at 08:59, Guozhang Wang wangg...@gmail.com
wrote:
Sounds great, +1
I think it will depend on how your producer application logs things, but
yes, I have historically seen exceptions in the producer logs when messages
exceed the max message size.
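The failure mode described above can be illustrated with a short, self-contained Python sketch. The limit value, function name, and exception class here are stand-ins for illustration, not the real client's API; the ~1 MB default mirrors the broker's usual message.max.bytes setting:

```python
# Illustrative sketch of a producer-side size check. The names below are
# hypothetical; real Kafka clients raise their own exception types, but the
# effect is the same: an oversized message surfaces as an error in the
# producer's logs rather than being silently dropped.
MAX_REQUEST_SIZE = 1_048_576  # ~1 MB, a common broker default for message.max.bytes


class MessageSizeTooLargeError(Exception):
    """Raised when a serialized message exceeds the configured maximum."""


def check_message_size(payload: bytes, max_size: int = MAX_REQUEST_SIZE) -> None:
    # The serialized size is compared against the configured limit; if it is
    # too large, the send fails with an exception the application can log.
    if len(payload) > max_size:
        raise MessageSizeTooLargeError(
            f"message of {len(payload)} bytes exceeds limit of {max_size}"
        )


# A 2 MB payload trips the check:
try:
    check_message_size(b"x" * 2_097_152)
except MessageSizeTooLargeError as e:
    print("producer log would show:", e)
```

Whether such an exception actually appears in the logs depends on how the producer application handles and logs errors, which matches the caveat above.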
-Mark
On Mon, Oct 27, 2014 at 10:19 AM, Chen Wang chen.apache.s...@gmail.com
wrote:
Hello folks,
I recently noticed our
If you use Kafka for the first bulk load, you will test your new
Teradata-Kafka-Hive pipeline, as well as have the ability to blow away
the data in Hive and reflow it from Kafka without an expensive full
re-export from Teradata. As for whether Kafka can handle hundreds of GB of
data: Yes,
Did this mailing list ever get created? Was there consensus that it did or
didn't need to be created?
-Mark
On Jul 18, 2014, at 14:34, Jay Kreps jay.kr...@gmail.com wrote:
A question was asked in another thread about what was an effective way
to contribute to the Kafka project for people who
When we were in the testing phase, we would either create a new topic with the
correct settings or shut the cluster down and delete the topic by hand from
ZooKeeper and local disk. In prod we have the cluster configured via
configuration management and auto-create turned off.
The ability to delete a topic
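For reference, the broker settings involved in the workflow described above might look like this (a sketch; topic deletion only takes effect on broker versions that support it):

```properties
# server.properties
delete.topic.enable=true          # allow topic deletion to actually remove data
auto.create.topics.enable=false   # don't create topics on first produce/fetch
```

With auto-create disabled, topics exist only when created explicitly, which is why configuration management can own the topic list in prod.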