Kafka version: 0.10.0
Exception Trace
java.util.NoSuchElementException
at kafka.utils.IteratorTemplate.next(IteratorTemplate.scala:37)
at kafka.log.LogSegment.recover(LogSegment.scala:189)
at kafka.log.Log.recoverLog(Log.scala:268)
at kafka.log.Log.loadSegments(Log.scala:243)
We had a Kafka 0.9 consumer stuck in the epoll native call under the
following circumstances.
1. It was started, bootstrapped against a cluster with 3 brokers A, B and C
with ids 1, 2, 3.
2. Change the consumer's assignment to some topic partitions, and seek to
the beginning of each topic partition (see the sketch after this list).
3.
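For reference, a minimal sketch of the assign-and-seek pattern from step 2, written against the 0.9 Java consumer API; the broker addresses, topic name, and partition numbers are illustrative assumptions, not the poster's actual values:

import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignAndSeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        // brokers A, B and C from step 1 (addresses are placeholders)
        props.put("bootstrap.servers", "brokerA:9092,brokerB:9092,brokerC:9092");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // manual assignment: no consumer-group rebalancing is involved
        List<TopicPartition> partitions = Arrays.asList(
                new TopicPartition("some-topic", 0),
                new TopicPartition("some-topic", 1));
        consumer.assign(partitions);

        // seek to the beginning of each assigned topic partition
        consumer.seekToBeginning(partitions.toArray(new TopicPartition[0]));

        consumer.close();
    }
}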
Can you paste the entire exception stacktrace please?
-Jaikiran
On Tuesday 30 August 2016 11:23 AM, Gaurav Agarwal wrote:
Hi there, just wanted to bump up the thread one more time to check if
someone can point us in the right direction... This one was quite a serious
failure that took down many of our Kafka brokers.
On Sat, Aug 27, 2016 at 2:11 PM, Gaurav Agarwal
wrote:
> Hi All,
>
> We are facing a weird problem
Michael, Thanks for your help.
Taking the word count example, I am trying to walk through the code based
on your explanation:
val textLines: KStream[String, String] = builder.stream("input-topic")
val wordCounts: KStream[String, JLong] = textLines
  .flatMapValues(_.toLowerCase.split("\\W+").toIterable.asJava)
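To make the walk-through concrete, here is a minimal sketch of the same topology in the 0.10.0 Java DSL (the topic names and the "Counts" store name are illustrative); the map step re-keys each record by the word itself, so that counting happens per word:

import java.util.Arrays;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class WordCountSketch {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> textLines = builder.stream("input-topic");

        KTable<String, Long> wordCounts = textLines
                // split each line into lowercase words
                .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
                // re-key each record by the word itself
                .map((key, word) -> new KeyValue<>(word, word))
                // count occurrences per key, i.e. per word
                .countByKey("Counts");

        wordCounts.to(Serdes.String(), Serdes.Long(), "output-topic");
        // pass `builder` to a KafkaStreams instance to actually run the topology
    }
}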
Hello all
how is it possible to make the consumer re-read the log? I think I have to
move the offset back from the current offset?
Thanks a lot
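One way to do that with the Java consumer, as a minimal sketch (topic and group names are illustrative): poll once so partitions get assigned, then seek the offset back, here all the way to the start, before polling again:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RereadExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "reread-example");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));
        consumer.poll(0); // join the group so partitions get assigned

        // move the offset back; here, to the very beginning of each partition
        for (TopicPartition tp : consumer.assignment()) {
            consumer.seek(tp, 0L);
        }

        ConsumerRecords<String, String> records = consumer.poll(1000);
        System.out.println("re-read " + records.count() + " records");
        consumer.close();
    }
}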
Hello,
It seems to me that there are two different log4j appender classes:
1. Apache Kafka:
https://github.com/apache/kafka/blob/6eacc0de303e4d29e083b89c1f53615c1dfa291e/log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
2. Log4J2:
https://github.co
Hi Jun,
I just created KAFKA-4099 and will submit patch soon.
Thanks,
Jiangjie (Becket) Qin
On Mon, Aug 29, 2016 at 11:55 AM, Jun Rao wrote:
> Jiangjie,
>
> Good point on the time index format related to uncompressed messages. It
> does seem that indexing based on file position requires a bit
Jiangjie,
Good point on the time index format related to uncompressed messages. It
does seem that indexing based on file position requires a bit more
complexity. Since the time index is going to be used infrequently, having a
level of indirection doesn't seem a big concern. So, we can leave the lo
Quick reply only, since I am on my mobile. Not an exact answer to your
problem but still somewhat related:
http://www.infolace.com/blog/2016/07/14/simple-spatial-windowing-with-kafka-streams/
(perhaps you have seen this already).
-Michael
On Sun, Aug 28, 2016 at 4:55 AM, Farhon Zaharia
wrote:
Most probably because, in your build.sbt, you didn't enable the
-Xexperimental compiler flag for Scala. This is required when using Scala
2.11 (as you do) to enable SAM conversion for Java 8 lambda support. Because
this compiler flag is not set, your build fails: it cannot translate
`_.toUpperCase()` into an instance of the expected functional interface.
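If that is indeed the cause, the fix should be a one-liner, assuming a standard sbt setup: add scalacOptions += "-Xexperimental" to build.sbt and recompile.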
In Kafka Streams, data is partitioned according to the keys of the
key-value records, and operations such as countByKey operate on these
stream partitions. When reading data from Kafka, these stream partitions
map to the partitions of the Kafka input topic(s), but these may change
once you add pro
Hi,
RequestTimeout is used for 2 cases (a config sketch follows below):
1) Timing out the batches sitting in the accumulator.
2) Requests that have already been sent over the wire and for which you have
not yet heard back from the server.
In a case where there is a network partition, the client might not detect it
until the actual TCP timeout that
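To make that concrete, a minimal sketch of raising the timeout on the producer (the value is illustrative, not a recommendation; the 0.9/0.10 default is 30000 ms):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TimeoutExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // bounds both the time batches may sit in the accumulator (case 1)
        // and the wait for a response to an in-flight request (case 2)
        props.put("request.timeout.ms", "60000");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        producer.close();
    }
}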
Hi All,
In the DefaultPartitioner implementation, when the key is null, we get the
partition number by taking a counter modulo the number of available
partitions. Below is the code snippet.
if (availablePartitions.size() > 0) {
    int part = Utils.toPositive(nextValue) % availablePartitions.size();
    return availablePartitions.get(part).partition();
}
Hi all,
Any opinions about monitoring the records-lag-max for a kafka streams job?
Thanks,
Rohit
On 8/26/16, 2:53 PM, "Rohit Valsakumar" wrote:
>Hi all,
>
>I want to monitor the max lag of a kafka streams job which is consuming
>from three topics and to do that I have implemented the MetricsRe
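In case it helps: a minimal sketch of the MetricsReporter approach mentioned above, written against the 0.10.x reporter interface (the class name and the println reporting are illustrative assumptions; scheduling of the report() calls is left to the caller):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

public class MaxLagReporter implements MetricsReporter {

    // records-lag-max metrics we want to sample periodically
    private final Map<MetricName, KafkaMetric> lagMetrics = new ConcurrentHashMap<>();

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void init(List<KafkaMetric> metrics) {
        for (KafkaMetric metric : metrics) {
            metricChange(metric);
        }
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        // records-lag-max is reported by the consumers embedded in the streams job
        if ("records-lag-max".equals(metric.metricName().name())) {
            lagMetrics.put(metric.metricName(), metric);
        }
    }

    @Override
    public void metricRemoval(KafkaMetric metric) {
        lagMetrics.remove(metric.metricName());
    }

    // call this from a scheduled thread, or push the values to your monitoring system
    public void report() {
        for (KafkaMetric metric : lagMetrics.values()) {
            System.out.println(metric.metricName() + " = " + metric.value());
        }
    }

    @Override
    public void close() { }
}

The reporter would then be registered on the job through the metric.reporters client config.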
How many brokers do you have in this cluster? Have you tried using a stable
zookeeper release like 3.4.8?
-Harsha
On Mon, Aug 29, 2016 at 5:21 AM Nomar Morado wrote:
> we are using kafka 0.9.0.1 and zk 3.5.0-alpha
>
> On Mon, Aug 29, 2016 at 8:12 AM, Nomar Morado
> wrote:
>
> > we would get this occasio
Jan,
For the usefulness of time index, it's ok if you don't plan to use it.
However, I do think there are other people who will want to use it. Fixing
an application bug always requires some additional work. Intuitively, being
able to seek back to a particular point of time for replay is going to
Thanks Dhiraj!
On 8/28/16, 1:57 AM, "dhiraj prajapati" wrote:
>It is per partition
>
>On Aug 27, 2016 3:10 AM, "Amit Karyekar" wrote:
>
>> Hi,
>>
>> We’re using Kafka 0.9
>>
>> Wanted to check whether log.retention.bytes works on per partition basis
>> or is it cumulative of all partitions?
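(Since the limit is per partition, the cumulative cap for a topic works out to roughly log.retention.bytes times the number of partitions, and times the replication factor again if you count all replicas across the cluster.)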
Hi Krishna,
Thank you for your response.
Connections are already made, but if we increase the request timeout 5 times,
let's say request.timeout.ms = 5*6 , then the number of 'Batch Expired'
exceptions is lower, so what is the recommended value for 'request.timeout.ms'?
If we increase it more, is th
Found the issue. The zookeeper listen port was not open in the firewall.
On Monday, August 29, 2016 8:55 AM, Tech Bolek
wrote:
Here is the scenario:
- My kafka server and the zookeeper are running and working fine on the
remote server as long as I launch the process on the same remot
Hi,
For "word count" example in Hadoop, there are shuffle-sort-and-reduce
phases that handles outputs from different mappers, how does it work in
KStream ?
Here is the scenario:
- My kafka server and the zookeeper are running and working fine on the
remote server as long as I launch the process on the same remote server.
- I don't have any connectivity issues between my local machine and the
server. I can ssh and access all other applicat
Hi all
Have a question about the scenario where a consumer that is consuming Kafka
records is not very fast (regardless of the reason). And yes, I know about
certain configuration properties on both server and consumer which help with
mitigating the effects, so I simply want to confirm that w
we are using kafka 0.9.0.1 and zk 3.5.0-alpha
On Mon, Aug 29, 2016 at 8:12 AM, Nomar Morado
wrote:
> we would get this occasionally after a weekend reboot/restart.
>
> we tried restarting a couple of times all to naught.
>
> we had to delete zk's directory to get this going again.
>
> any ideas w
we would get this occasionally after a weekend reboot/restart.
we tried restarting a couple of times all to naught.
we had to delete zk's directory to get this going again.
any ideas what might cause this issue and suggestions on how to resolve
this?
thanks.
Two out of three of our Kafka nodes have become unrecoverable due to disk
corruption. I launched two new nodes, but they got new broker ids.
To redistribute the topics across the cluster, I ran the command:
---
/opt/kafka/bin/kafka-reassign-partitions.sh --broker-list
"1003,1005,1006,1007" --e