[
https://issues.apache.org/jira/browse/KAFKA-820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13609124#comment-13609124
]
Jun Rao commented on KAFKA-820:
-------------------------------
Thanks for patch v3. A few minor comments. Once addressed, the patch can be
checked in.
30. AdminUtil.getBrokerInfoFromCache(): The following line uses a double negation:
brokerMetadata.filterNot(!_.isDefined).map(_.get)
It will be easier to read if we do:
brokerMetadata.filter(_.isDefined).map(_.get)
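For illustration only, a small self-contained sketch (using a stand-in Seq of Option[Int] rather than the real cached broker metadata) showing that the two forms produce the same result, and that flatten is an even shorter equivalent:

    // Stand-in for the cached broker metadata: some entries present, some missing.
    val brokerMetadata: Seq[Option[Int]] = Seq(Some(1), None, Some(3))

    // Current form with the double negation.
    val a = brokerMetadata.filterNot(!_.isDefined).map(_.get)   // Seq(1, 3)

    // Suggested, easier-to-read form.
    val b = brokerMetadata.filter(_.isDefined).map(_.get)       // Seq(1, 3)

    // Equivalent one-step alternative for a collection of Options.
    val c = brokerMetadata.flatten                               // Seq(1, 3)

    assert(a == b && b == c)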
31. AdminUtil.fetchTopicMetadataFromZk():
31.1 In the following statement, should the try block end after the first two
statements (i.e., move the catch clause up so it only wraps the two
getBrokerInfoFromCache() calls)? Otherwise, we unnecessarily wrap a
ReplicaNotAvailableException inside another ReplicaNotAvailableException.
try {
  replicaInfo = getBrokerInfoFromCache(zkClient, cachedBrokerInfo, replicas.map(id => id.toInt))
  isrInfo = getBrokerInfoFromCache(zkClient, cachedBrokerInfo, inSyncReplicas)
  if(replicaInfo.size < replicas.size)
    throw new ReplicaNotAvailableException("Replica information not available for following brokers: " +
      replicas.filterNot(replicaInfo.map(_.id).contains(_)).mkString(","))
  if(isrInfo.size < inSyncReplicas.size)
    throw new ReplicaNotAvailableException("In Sync Replica information not available for following brokers: " +
      inSyncReplicas.filterNot(isrInfo.map(_.id).contains(_)).mkString(","))
} catch {
  case e => throw new ReplicaNotAvailableException(e)
}
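For example, the restructuring could look something like the following (reusing the identifiers from the snippet above; just a sketch, not a tested patch), so that only the cache lookups are wrapped:

    // Only the cache lookups are inside the try, so a failure there is wrapped exactly once.
    try {
      replicaInfo = getBrokerInfoFromCache(zkClient, cachedBrokerInfo, replicas.map(id => id.toInt))
      isrInfo = getBrokerInfoFromCache(zkClient, cachedBrokerInfo, inSyncReplicas)
    } catch {
      case e => throw new ReplicaNotAvailableException(e)
    }
    // The explicit throws below are no longer re-wrapped by the catch above.
    if(replicaInfo.size < replicas.size)
      throw new ReplicaNotAvailableException("Replica information not available for following brokers: " +
        replicas.filterNot(replicaInfo.map(_.id).contains(_)).mkString(","))
    if(isrInfo.size < inSyncReplicas.size)
      throw new ReplicaNotAvailableException("In Sync Replica information not available for following brokers: " +
        inSyncReplicas.filterNot(isrInfo.map(_.id).contains(_)).mkString(","))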
31.2 Similarly, in the following code, should we move the try/catch inside
case Some(l)? Otherwise, we wrap a LeaderNotAvailableException inside
another LeaderNotAvailableException.
try {
  leaderInfo = leader match {
    case Some(l) => Some(getBrokerInfoFromCache(zkClient, cachedBrokerInfo, List(l)).head)
    case None => throw new LeaderNotAvailableException("No leader exists for partition " + partition)
  }
} catch {
  case e => throw new LeaderNotAvailableException("Leader not available for topic %s partition %d".format(topic, partition), e)
}
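Again just a sketch (reusing the identifiers from the snippet above; not a drop-in patch), this is roughly what moving the try/catch inside case Some(l) could look like:

    leaderInfo = leader match {
      case Some(l) =>
        // Only the cache lookup needs wrapping; the None branch already throws the right exception.
        try {
          Some(getBrokerInfoFromCache(zkClient, cachedBrokerInfo, List(l)).head)
        } catch {
          case e => throw new LeaderNotAvailableException("Leader not available for topic %s partition %d".format(topic, partition), e)
        }
      case None => throw new LeaderNotAvailableException("No leader exists for partition " + partition)
    }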
> Topic metadata request handling fails to return all metadata about replicas
> ---------------------------------------------------------------------------
>
> Key: KAFKA-820
> URL: https://issues.apache.org/jira/browse/KAFKA-820
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 0.8
> Reporter: Neha Narkhede
> Assignee: Neha Narkhede
> Priority: Blocker
> Labels: kafka-0.8
> Attachments: kafka-820-v1.patch, kafka-820-v2.patch,
> kafka-820-v3.patch
>
>
> The admin utility that fetches topic metadata needs improvement in error
> handling. While fetching replica and isr broker information, if one of the
> replicas is offline, it fails to fetch the replica and isr info for the rest
> of them. This creates confusion on the client, since it then appears to the
> client that the rest of the brokers are offline as well.