Jenkins build is back to normal : kafka-trunk-jdk8 #4495

2020-05-02 Thread Apache Jenkins Server




Jenkins build is back to normal : kafka-trunk-jdk14 #51

2020-05-02 Thread Apache Jenkins Server




Permission to create a KIP

2020-05-02 Thread Aakash Shah
Hello,

I would like to request permission to create a KIP.

My Wiki ID is aakash33 and my email is as...@confluent.io.

Thank you!

Best,

Aakash Shah


Re: [DISCUSS] KIP-589 Add API to Update Replica State in Controller

2020-05-02 Thread Tom Bentley
Hi David,

> In the rejecting the alternative of having an RPC for log dir failures
> you say
>
> I guess what I really mean here is that I wanted to avoid exposing the
> notion of a log dir to the controller. I can update the description to
> reflect this.
>

Ah, I think I see now. While each broker knows about its log dirs, this
isn't something that's stored in ZooKeeper or known to the controller.


> > It's also not completely clear that the cost of having to enumerate all
> the partitions on a log dir was weighed against the perceived benefit of a
> more flexible RPC.
>
> The enumeration isn't strictly required. In the "RPC semantics" section, I
> mention that if no topics are present in the RPC request, then all topics
> on the broker are implied. And if a topic is given with no partitions, all
> partitions for that topic (on the broker) are implied. Does this make
> sense?
>

So the no-topics-present optimisation wouldn't be available to a broker
with more than one log dir where only some of those log dirs failed. I
don't suppose that's a problem, though.
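For what it's worth, the implied-scope semantics quoted above could be sketched like this. This is a hypothetical illustration only; the class and method names are invented for the example, not taken from the KIP:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReplicaStateScope {

    // An empty topic map means "all topics on the broker"; an empty
    // partition list for a topic means "all partitions of that topic".
    // (Topics unknown to the broker are not handled here, for brevity.)
    static Map<String, List<Integer>> resolve(Map<String, List<Integer>> requested,
                                              Map<String, List<Integer>> onBroker) {
        if (requested.isEmpty()) {
            return onBroker;
        }
        Map<String, List<Integer>> resolved = new HashMap<>();
        for (Map.Entry<String, List<Integer>> entry : requested.entrySet()) {
            List<Integer> partitions = entry.getValue().isEmpty()
                    ? onBroker.get(entry.getKey())
                    : entry.getValue();
            resolved.put(entry.getKey(), partitions);
        }
        return resolved;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> onBroker = Map.of("a", List.of(0, 1), "b", List.of(0));
        // No topics given: the whole broker is implied.
        System.out.println(resolve(Map.of(), onBroker));
        // Topic "a" given with no partitions: all of "a"'s partitions are implied.
        System.out.println(resolve(Map.of("a", List.of()), onBroker));
    }
}
```

Under these semantics a broker with several log dirs, only one of which failed, would have to enumerate the affected topics explicitly rather than send an empty request.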

Thanks again,

Tom


On Fri, May 1, 2020 at 5:48 PM David Arthur  wrote:

> Jose/Colin/Tom, thanks for the feedback!
>
> > Partition level errors
>
> This was an oversight on my part, I meant to include these in the response
> RPC. I'll update that.
>
> > INVALID_REQUEST
>
> I'll update this text description; that was a copy/paste leftover.
>
> > I think we should mention that the controller will keep its current
> implementation of marking the replicas as offline because of failure in the
> LeaderAndIsr response.
>
> Good suggestions, I'll add that.
>
> > Does EventType need to be an Int32?
>
> No, it doesn't. I'll update to Int8. Do we have an example of the enum
> paradigm in our RPC today? I'm curious if we actually map it to a real Java
> enum in the AbstractRequest/Response classes.
>
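The Int8-backed enum mapping David asks about might look like the sketch below. This is a hedged illustration only; the type name and byte codes are invented for the example, not taken from the KIP or the existing RPC classes:

```java
public class ReplicaEventTypes {

    enum ReplicaEventType {
        LOG_DIR_FAILURE((byte) 0),
        REPLICA_OFFLINE((byte) 1);

        final byte code;

        ReplicaEventType(byte code) {
            this.code = code;
        }

        // Decode the Int8 wire value back into the Java enum; unknown
        // codes are rejected, in the spirit of INVALID_REQUEST handling.
        static ReplicaEventType fromCode(byte code) {
            for (ReplicaEventType type : values()) {
                if (type.code == code) {
                    return type;
                }
            }
            throw new IllegalArgumentException("Unknown replica event type: " + code);
        }
    }

    public static void main(String[] args) {
        System.out.println(ReplicaEventType.fromCode((byte) 1)); // prints REPLICA_OFFLINE
    }
}
```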
> > AlterReplicaStates
>
> Sounds good to me.
>
> > In the rejecting the alternative of having an RPC for log dir failures
> you say
>
> I guess what I really mean here is that I wanted to avoid exposing the
> notion of a log dir to the controller. I can update the description to
> reflect this.
>
> > It's also not completely clear that the cost of having to enumerate all
> the partitions on a log dir was weighed against the perceived benefit of a
> more flexible RPC.
>
> The enumeration isn't strictly required. In the "RPC semantics" section, I
> mention that if no topics are present in the RPC request, then all topics
> on the broker are implied. And if a topic is given with no partitions, all
> partitions for that topic (on the broker) are implied. Does this make
> sense?
>
> Thanks again! I'll update the KIP and leave a message here once it's
> revised.
>
> David
>
> On Wed, Apr 29, 2020 at 11:20 AM Tom Bentley  wrote:
>
> > Hi David,
> >
> > Thanks for the KIP!
> >
> > In the rejecting the alternative of having an RPC for log dir failures
> you
> > say:
> >
> > It was also rejected to prevent "leaking" the notion of a log dir to the
> > > public API.
> > >
> >
> > I'm not quite sure I follow that argument, since we already have RPCs for
> > changing replica log dirs. So in a general sense log dirs already exist
> in
> > the API. I suspect you were using public API to mean something more
> > specific; could you elaborate?
> >
> > It's also not completely clear that the cost of having to enumerate all
> the
> > partitions on a log dir was weighed against the perceived benefit of a
> more
> > flexible RPC. (I'm sure it was, but it would be good to say so).
> >
> > Many thanks,
> >
> > Tom
> >
> > On Wed, Apr 29, 2020 at 12:04 AM Colin McCabe 
> wrote:
> >
> > > Hi David,
> > >
> > > Thanks for the KIP!
> > >
> > > I think the ReplicaStateEventResponse should have a separate error code
> > > for each partition.
> > >  Currently it just has one error code for the whole request/response,
> if
> > > I'm reading this right.  I think Jose made a similar point as well.  We
> > > should plan for scenarios where some replica states can be changed and
> > some
> > > can't.
> > >
> > > Does EventType need to be an Int32?  For enums, we usually use the
> > > smallest reasonable type, which would be Int8 here.  We can always
> change
> > > the schema later if needed.  UNKNOWN_REPLICA_EVENT_TYPE seems
> unnecessary
> > > since INVALID_REQUEST covers this case.
> > >
> > > I'd also suggest "AlterReplicaStates[Request,Response]" as a slightly
> > > better name for this RPC.
> > >
> > > cheers,
> > > Colin
> > >
> > >
> > > On Tue, Apr 7, 2020, at 12:43, David Arthur wrote:
> > > > Hey everyone,
> > > >
> > > > I'd like to start the discussion for KIP-589, part of the KIP-500
> > effort
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-589+Add+API+to+update+Replica+state+in+Controller
> > > >
> > > > This is a proposal to use a new RPC instead of ZooKeeper for
> notifying
> > > the
> > > > controller of an offline replica. Please give a read and let me know

Kafka Connect - AvroConverter - Unions with Enums Problem

2020-05-02 Thread Nagendra Korrapati
Hello

Sorry to write to this mailing list; this is more specific to a problem I'm
having with Kafka Connect.

I have a union with an enum in it. ToConnectSchema converts it into a STRUCT
schema with fields, and the enum field becomes STRING.

The SpecificRecord has that enum field typed as the generated enum class.

But in the AvroData class, the method isInstanceOfAvroSchemaTypeForSimpleSchema
checks whether the value is an instance of CharSequence.

In the SpecificRecord the value type is the enum, so the check fails. The end
result is that toConnectData throws an exception with the message

“Did not find matching union field for data : enumstring”
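A stdlib-only sketch of the mismatch being described, where MyUnionEnum stands in for the Avro-generated enum class and matchesStringSchema is a simplification of the converter's check, not the real AvroData code:

```java
public class EnumUnionCheck {

    enum MyUnionEnum { ENUMSTRING }

    // Simplified stand-in for the string case of the converter's
    // type check: only CharSequence values match a STRING schema.
    static boolean matchesStringSchema(Object value) {
        return value instanceof CharSequence;
    }

    public static void main(String[] args) {
        Object fromSpecificRecord = MyUnionEnum.ENUMSTRING; // a Java enum, not a CharSequence
        System.out.println(matchesStringSchema(fromSpecificRecord));            // false: no union field matches
        System.out.println(matchesStringSchema(fromSpecificRecord.toString())); // true: the string form matches
    }
}
```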

Could someone please comment?

thanks
Nagendra

Re: [DISCUSS] KIP-598: Augment TopologyDescription with store and source / sink serde information

2020-05-02 Thread John Roesler
Hi all,

I’ve been sitting on another concern about this proposal. Since Matthias has 
just submitted a few questions, perhaps I can pile on two more this round. 

(10) Can we avoid coupling this KIP’s behavior to the choice of ‘build’ method? 
I.e., can we return the improved description even when people just call 
‘build()’?

Clearly, we need a placeholder if no serde is specified. How about “unknown”, 
or the name of the config keys, “default.key.serde”/“default.value.serde”?

I still have some deep reservation about the ‘build(Parameters)’ method itself. 
I don’t really want to side-track this conversation with all my concerns if we 
can avoid it, though. It seems like justification enough that introducing 
dramatically different behavior based on seemingly minor differences in API
calls will be a source of mystery and complexity for users.

I.e., I’m characterizing a completely different string format as “dramatically 
different”, as opposed to just having a placeholder string.

(11) Regarding the wrapper serdes, I bet we can capture and print the inner 
types as well. 

Thanks again for the KIP!
-John
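On point (11), a hedged sketch of how a description could recurse into a wrapper serde's inner type. WrapperSerde here is a stand-in class, not the actual Kafka Streams wrapper types it imitates:

```java
public class SerdeNames {

    static class WrapperSerde {
        final Object inner;
        WrapperSerde(Object inner) { this.inner = inner; }
    }

    // Render "Wrapper<Inner>" instead of just "Wrapper" when the
    // wrapper exposes its inner serde.
    static String describe(Object serde) {
        if (serde instanceof WrapperSerde) {
            WrapperSerde wrapper = (WrapperSerde) serde;
            return wrapper.getClass().getSimpleName() + "<" + describe(wrapper.inner) + ">";
        }
        return serde.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        // A String stands in for a plain inner serde here.
        System.out.println(describe(new WrapperSerde("inner serde stand-in")));
        // prints WrapperSerde<String>
    }
}
```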



On Thu, Apr 30, 2020, at 19:10, Matthias J. Sax wrote:
> Guozhang,
> 
> thanks for the KIP!
> 
> Couple of comments/questions.
> 
> (1) In the new TopologyDescription output, the line for the
> windowed-count processor is:
> 
> >  Processor: myname (stores: [(myname-store, serdes: [SessionWindowedSerde, 
> > FullChangeSerde])])
> 
> For this case, both Serdes are wrappers and user would actually only
> specified wrapped Serdes for the key and value. Can we do anything about
> this? Otherwise, there might still be a runtime `ClassCastException`
> that a user cannot easily debug.
> 
> 
> (2) Nit: The JavaDocs of `Processor#storeSet()` seems to be incorrect
> (it says "The names of all connected stores." -- guess it's c&p error)?
> 
> 
> (3) The KIP mentioned to add `Store#changelogTopic()` method, but the
> output of `TopologyDescription#toString()` does not contain it. I think
> it might be good to add it, too?
> 
> 
> (4) The KIP also lists https://issues.apache.org/jira/browse/KAFKA-9913
> but it seems not to address it yet?
> 
> 
> (5) Like John, I also noticed that `List<String> Store#serdesNames()` is
> not a great API. I am not sure if I understand your reply though.
> AFAIK, there is no existing API
> 
> > List<Serde> StoreBuilder#serdes()
> 
> (cf
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/StoreBuilder.java)
> 
> 
> (6) Atm, we return `String` type for the Serdes. Do we think it's
> sufficient? Just want to double check.
> 
> 
> 
> -Matthias
> 
> 
> 
> 
> On 4/25/20 1:24 PM, Guozhang Wang wrote:
> > Hi John,
> > 
> > Thanks for the review! Replied inline.
> > 
> > On Fri, Apr 24, 2020 at 8:09 PM John Roesler  wrote:
> > 
> >> Hi Guozhang,
> >>
> >> Thanks for the KIP! I took a quick look, and I'm really happy to see this
> >> underway.
> >>
> >> Some quick questions:
> >>
> >> 1.  Can you elaborate on the reason that stores just have a list of
> >> serdes, whereas
> >> other components have an explicit key/value serde?
> >>
> > 
> > This is because of the existing API "List<Serde> StoreBuilder#serdes()".
> > Although both of its implementations would return two serdes (one for key
> > and one for value), the API is more general to return a list. And hence the
> > TopologyDescription#Store which gets them directly from StoreBuilder is
> > exposing the same API.
> > 
> > 1.5. A side-effect of this seems to be that the string-formatted serde
> >> description is
> >> different, depending on whether the serdes are listed on a store or a
> >> topic. Just an
> >> observation.
> >>
> > 
> > Yes I agree. I think we can probably change the "List<Serde>
> > StoreBuilder#serdes()" signature as well (which would be a breaking change
> > though, so we should do that via deprecation), but I'm a bit concerned
> > since it was designed for future store types which may not be of K-V format
> > any more.
> > 
> > 
> >> 2. You mentioned the key compatibility concern in my mind. We do know that
> >> such
> >> use cases exist. Namely, our own tests and
> >> https://zz85.github.io/kafka-streams-viz/
> >> I'm wondering if we can add a forward-compatible machine-readable format
> >> to the
> >> KIP, so that even though we must break the parsers right now, maybe we'll
> >> never
> >> have to break them again. For example, I'm thinking of a "toJson" method
> >> on the
> >> TopologyDescription that formats the entire topology description as a json
> >> string.
> >>
> >>
> > Yes, I also have concerns about that (as described in the compatibility
> > section). One proposal I have is that we ONLY augment the toString result
> > if the TopologyDescription is from a Topology built from
> > `StreamsBuilder#build(Properties)`, which is only recently added and hence
> > most old usage would not get the benefits of it. But after thinking about
> > this a bit more, I'm now more inclined to just always au

Build Issue

2020-05-02 Thread Dulvin Witharane
Hi,

I'm getting the following build issue when I load the project in IDEA.

Build file '/Users/dulvin/Documents/Work/git/kafka/build.gradle' line: 457

A problem occurred evaluating root project 'kafka'.
> Could not create task ':clients:spotbugsMain'.
   > Could not create task of type 'SpotBugsTask'.
      > Could not create an instance of type com.github.spotbugs.internal.SpotBugsReportsImpl.
         > org.gradle.api.reporting.internal.TaskReportContainer.<init>(Ljava/lang/Class;Lorg/gradle/api/Task;)V

Can anyone help me to get this fixed?

I have created the following issue in jira[1].
I have found a reference in SpotBugs repo[2] as well.


[1]https://issues.apache.org/jira/browse/KAFKA-9948
[2]https://github.com/spotbugs/spotbugs-gradle-plugin/issues/120
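One hedged reading of the stack trace, based on the SpotBugs issue linked above: the `TaskReportContainer.<init>` constructor signature changed between Gradle major versions, so a SpotBugs plugin built against an older Gradle API fails under Gradle 6.0.1. Assuming that's the cause, a first check is to build with the Gradle version the project pins via the wrapper rather than a newer locally installed Gradle; the fragment below is illustrative, not the exact version Kafka pinned at the time:

```properties
# gradle/wrapper/gradle-wrapper.properties (illustrative values)
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-5.6.2-all.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
```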

Thanks and Regards,

-- 
*Witharane, D.R.H.*

Software Engineer, WSO2 Inc,
Colombo 03
Mobile : +94 77 6746781
Skype  : dulvin.rivindu



[jira] [Created] (KAFKA-9948) Gradle Issue

2020-05-02 Thread Dulvin Witharane (Jira)
Dulvin Witharane created KAFKA-9948:
---

 Summary: Gradle Issue
 Key: KAFKA-9948
 URL: https://issues.apache.org/jira/browse/KAFKA-9948
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 2.4.1
 Environment: gradle -v


Gradle 6.0.1


Build time:   2019-11-18 20:25:01 UTC
Revision: fad121066a68c4701acd362daf4287a7c309a0f5

Kotlin:   1.3.50
Groovy:   2.5.8
Ant:  Apache Ant(TM) version 1.10.7 compiled on September 1 2019
JVM:  1.8.0_152 (Oracle Corporation 25.152-b16)
OS:   Mac OS X 10.15.4 x86_64
Reporter: Dulvin Witharane


Can't get Gradle to build kafka.

 

Build file '/Users/dulvin/Documents/Work/git/kafka/build.gradle' line: 457

A problem occurred evaluating root project 'kafka'.
> Could not create task ':clients:spotbugsMain'.
   > Could not create task of type 'SpotBugsTask'.
      > Could not create an instance of type com.github.spotbugs.internal.SpotBugsReportsImpl.
         > org.gradle.api.reporting.internal.TaskReportContainer.<init>(Ljava/lang/Class;Lorg/gradle/api/Task;)V

 

The above error is thrown



--
This message was sent by Atlassian Jira
(v8.3.4#803005)