Re: Monitoring Kafka

2018-04-21 Thread Steve Jang
The following tool is really good:
https://github.com/yahoo/kafka-manager


On Sat, Apr 21, 2018 at 5:42 AM, Joris Meijer  wrote:

> You can do this without exposing the JMX port, e.g. by using a Prometheus
> exporter as javaagent (https://github.com/prometheus/jmx_exporter).
>
> Metrics reporters, such as the one from Confluent, also don't require you
> to open ports, because metrics are pushed out of the broker (
> https://docs.confluent.io/current/kafka/metrics-reporter/metrics-reporter.html
> ).
>
> Joris
>
> On Sat, Apr 21, 2018, 14:01 Rahul Singh 
> wrote:
>
> > Without JMX it may be difficult... why not install an agent and report
> > to an external service like ELK or New Relic?
> >
> > That's a long-standing industry pattern.
> >
> > Some reading, and some tools in the readings: these articles are
> > opinionated towards the vendors that published them, but it's a starting
> > point.
> >
> > https://blog.serverdensity.com/how-to-monitor-kafka/
> > https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/
> >
> >
> > On Apr 21, 2018, 6:54 AM -0400, Raghu Arur , wrote:
> > > Hi,
> > >
> > > Is there a way to pull broker stats (like the partitions it is
> > > managing, jvm info, state of the partitions, etc.) without using JMX?
> > > We are shipping Kafka in an appliance, and there are restrictions on
> > > the ports that are open for security reasons. Are there any known ways
> > > of monitoring the health of Kafka?
> > >
> > > Thanks,
> > > Raghu.
> >
>
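Joris's javaagent suggestion can be sketched like this; the jar path, port
number, and config file name below are illustrative assumptions, not values
from the thread:

```shell
# Attach the Prometheus JMX exporter as a javaagent so broker metrics are
# scraped over a single HTTP port instead of opening the JMX port.
EXPORTER_JAR=/opt/jmx_prometheus_javaagent-0.18.0.jar
EXPORTER_PORT=7071
EXPORTER_CONFIG=/etc/kafka/jmx_exporter.yaml

# The agent serves metrics at http://<broker>:7071/metrics once Kafka starts.
export KAFKA_OPTS="-javaagent:${EXPORTER_JAR}=${EXPORTER_PORT}:${EXPORTER_CONFIG}"
echo "$KAFKA_OPTS"
# then start the broker as usual, e.g.:
#   bin/kafka-server-start.sh config/server.properties
```

The appliance only needs to allow outbound scrapes (or none at all, if a
push-based metrics reporter is used instead).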


Re: timestamp-oriented API

2018-02-19 Thread Steve Jang
If you set *message.timestamp.type* (or *log.message.timestamp.type*) to
LogAppendTime, this would make sense.

I am new to Kafka, too, and if this was set to CreateTime, I don't know
what the behavior would be.  There is a *message.timestamp.difference.max.ms*
setting too, so there seems to be a certain bound on how much clock skew is
allowed between the producer and the broker, so you could implement various
types of policies (min, max, etc.) for this API.
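The broker does have built-in support: each log segment keeps a time index
mapping timestamps to offsets, and offsetsForTimes returns the earliest
offset whose timestamp is >= the target. A minimal self-contained sketch of
that lookup (the Entry structure and index contents are invented for
illustration, not the broker's actual on-disk format):

```java
import java.util.List;
import java.util.OptionalLong;

public class TimeIndexSketch {
    // One entry per indexed record: (timestamp, offset). Sorted by timestamp,
    // which holds when log.message.timestamp.type=LogAppendTime makes
    // timestamps monotonic within a partition.
    record Entry(long timestamp, long offset) {}

    // Earliest offset with timestamp >= target, mirroring the semantics of
    // Consumer#offsetsForTimes for a single partition.
    static OptionalLong offsetForTime(List<Entry> index, long target) {
        int lo = 0, hi = index.size();
        while (lo < hi) {                       // binary search: first ts >= target
            int mid = (lo + hi) >>> 1;
            if (index.get(mid).timestamp() < target) lo = mid + 1;
            else hi = mid;
        }
        return lo == index.size() ? OptionalLong.empty()
                                  : OptionalLong.of(index.get(lo).offset());
    }

    public static void main(String[] args) {
        List<Entry> index = List.of(
            new Entry(1000L, 0), new Entry(2000L, 5), new Entry(3000L, 9));
        System.out.println(offsetForTime(index, 1500L)); // OptionalLong[5]
        System.out.println(offsetForTime(index, 4000L)); // OptionalLong.empty
    }
}
```

With CreateTime the index is only approximately ordered, which is why the
result is a lower bound rather than an exact position when timestamps
interleave.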


On Mon, Feb 19, 2018 at 7:36 AM, Xavier Noria  wrote:

> In the mental model I am building of how Kafka works (new to this), the
> broker keeps offsets by consumer group, and individual consumers basically
> depend on the offset of the consumer group they join. Also consumer groups
> may opt to start from the beginning.
>
> OK, in that mental model there is a linearization of messages per
> partition. As the documentation says, there is a total order per partition,
> and the order is based on the offset, unrelated to the timestamp.
>
> But I see the Java library has timestamp-oriented methods like:
>
>
> https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/Consumer.html#offsetsForTimes(java.util.Map)
>
> How does that make sense given the model described above? How is it
> implemented? Does the broker have built-in support for this? What happens
> if, due to race conditions or machines with clocks out of sync, you have
> messages with timestamps interleaved?
>
> Could anyone reconcile that API with the intrinsic offset-based contract?
>



-- 
Steve Jang
Principal Engineer
Mobile +1.206.384.2999 | Support +1.800.340.9194