[jira] [Created] (KAFKA-12159) kafka-console-producer prompt should redisplay after an error output

2021-01-07 Thread Victoria Bialas (Jira)
Victoria Bialas created KAFKA-12159:
---

 Summary: kafka-console-producer prompt should redisplay after an 
error output
 Key: KAFKA-12159
 URL: https://issues.apache.org/jira/browse/KAFKA-12159
 Project: Kafka
  Issue Type: Bug
  Components: producer, tools
 Environment: Mac OSX Catalina 10.15.7, iTerm, Linux
Reporter: Victoria Bialas


*BLUF:* The kafka-console-producer should redisplay its prompt after outputting an 
error message (if it is still running, which in most cases it is). The current 
behaviour is that it doesn't redisplay the prompt, and hitting return at that 
point shuts it down.

*DETAIL AND EXAMPLE:* The console producer utility behaves in a less than 
optimal way when you get an error. It doesn't redisplay the producer prompt even 
though the producer is still running and accessible. If you hit return after 
the error, the producer shuts down, forcing you to restart it when in fact 
that wasn't necessary.

This makes it confusing to demo to users in the docs how to test producers and 
consumers in scenarios where an error is generated. (I am adding a tip to write 
around this, which will be published soon. It will be at the end of step 7 in 
[Demo: Enabling Schema Validation on a Topic at the Command 
Line|http://example.com].)

Here is an example from Confluent Platform. The scenario has you try to send a 
message with schema validation on (which will fail due to the message format), 
then disable schema validation and resend the message or another in a similar 
format, which should then succeed.
 # With Confluent "schema validation" on, try to send a message that doesn't 
conform to the schema defined for a topic.
*Producer*

{code:java}
Last login: Wed Jan  6 17:51:01 on ttys004
Vickys-MacBook-Pro:~ vicky$ kafka-console-producer --broker-list localhost:9092 
--topic test-schemas --property parse.key=true --property key.separator=,
>1,my first record
>2,my second record
>[2021-01-06 18:25:08,722] ERROR Error when sending message to topic 
>test-schemas with key: 1 bytes, value: 16 bytes with error: 
>(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.InvalidRecordException: This record has failed the 
validation on broker and hence will be rejected.

org.apache.kafka.common.KafkaException: No key found on line 3:
at 
kafka.tools.ConsoleProducer$LineMessageReader.readMessage(ConsoleProducer.scala:290)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:51)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)

Vickys-MacBook-Pro:~ vicky$ kafka-console-producer --broker-list localhost:9092 
--topic test-schemas --property parse.key=true --property key.separator=,
>3,my third record
>

{code}
You can see that you lose the producer prompt after the error and get only a 
blank line, which leads you to believe you've lost the producer (it's actually 
still running). If you hit return, the producer shuts down, forcing you to 
restart the producer to continue (when in fact this isn't necessary).
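The "No key found" failure on the blank line is consistent with key parsing along the lines of the sketch below. This is an illustrative, stdlib-only sketch; the class and method names are hypothetical, not Kafka's actual LineMessageReader code:

```java
// Illustrative sketch of key/value line parsing when parse.key=true.
// All names here are hypothetical; this is not Kafka's actual code.
public class LineParser {
    static final String KEY_SEPARATOR = ",";

    // Returns {key, value}; throws if the separator is missing, which is
    // what happens when you press return on a blank line after an error.
    public static String[] parse(String line, int lineNumber) {
        int idx = line.indexOf(KEY_SEPARATOR);
        if (idx < 0) {
            throw new IllegalStateException("No key found on line " + lineNumber);
        }
        return new String[] {
            line.substring(0, idx),
            line.substring(idx + KEY_SEPARATOR.length())
        };
    }

    public static void main(String[] args) {
        String[] kv = parse("1,my first record", 1);
        System.out.println(kv[0] + "|" + kv[1]);
        try {
            parse("", 3); // a blank line has no separator
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this model, the empty line produced by hitting return has no key separator, so the reader throws and the whole process exits.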

*Consumer*
The consumer still shows only a previous message that was sent when schema 
validation was disabled earlier in the demo.

{code:java}
Vickys-MacBook-Pro:~ vicky$ kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic test-schemas --property print.key=true
1
my first record
{code}

 # If instead, after disabling schema validation in a different shell, you 
return to the producer window and copy-paste or type the message on the blank 
line following the error, and then hit return, the message will send, and you 
will see it in the running consumer.
*Producer*

{code:java}
Vickys-MacBook-Pro:~ vicky$ kafka-console-producer --broker-list localhost:9092 
--topic test-schemas --property parse.key=true --property key.separator=,
>1,my first record
>2,my second record
>[2021-01-07 11:35:30,443] ERROR Error when sending message to topic 
>test-schemas with key: 1 bytes, value: 16 bytes with error: 
>(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.InvalidRecordException: One or more records have been 
rejected
3,my third record
>
{code}
*Consumer*

{code:java}
Vickys-MacBook-Pro:~ vicky$ kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic test-schemas --property print.key=true
1
my first record
3
my third record
{code}

If the prompt were simply shown again after the error, it would solve the 
usability problem.
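The requested behaviour can be sketched with a plain read loop that prints the error and then redisplays the prompt instead of exiting. This is a minimal, hypothetical illustration in stdlib Java, not Kafka's actual ConsoleProducer code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Minimal sketch of the requested behaviour: on a bad line, print the
// error and redisplay the prompt instead of shutting down. Hypothetical
// code, not Kafka's actual ConsoleProducer.
public class PromptLoop {
    public static void run(BufferedReader in) throws IOException {
        String line;
        int lineNumber = 0;
        System.out.print("> ");
        while ((line = in.readLine()) != null) {
            lineNumber++;
            try {
                if (!line.contains(",")) {
                    throw new IllegalStateException("No key found on line " + lineNumber);
                }
                System.out.println("sent: " + line);
            } catch (IllegalStateException e) {
                // Report the error but keep the reader loop alive.
                System.out.println("ERROR " + e.getMessage());
            }
            System.out.print("> "); // redisplay the prompt either way
        }
        System.out.println();
    }

    public static void main(String[] args) throws IOException {
        // Simulate typing a good line, a blank line, then another good line.
        run(new BufferedReader(new StringReader("1,first\n\n2,second\n")));
    }
}
```

In this sketch the blank line produces an error message, but the loop continues and the next record still sends, which is the behaviour the report asks for.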

cc: [~mjsax], [~guozhang], [~gshapira_impala_35cc], [~abhishekd.i...@gmail.com]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Resolved] (KAFKA-6453) Reconsider timestamp propagation semantics

2020-06-26 Thread Victoria Bialas (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victoria Bialas resolved KAFKA-6453.

Resolution: Fixed

Fixed by James Galasyn in https://github.com/apache/kafka/pull/8920

> Reconsider timestamp propagation semantics
> --
>
> Key: KAFKA-6453
> URL: https://issues.apache.org/jira/browse/KAFKA-6453
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Victoria Bialas
>Priority: Major
>  Labels: needs-kip
>
> At the moment, Kafka Streams only has a defined "contract" about timestamp propagation 
> at the Processor API level: all processors within a sub-topology see the 
> timestamp of the input topic record, and this timestamp is used for all 
> result records when writing them to a topic, too.
> The DSL currently inherits this "contract".
> From a DSL point of view, it would be desirable to provide a different 
> contract to the user. To allow this, we need to do the following:
>  - extend the Processor API to allow manipulating timestamps (i.e., a Processor can 
> set a new timestamp for downstream records)
>  - define a DSL "contract" for timestamp propagation for each DSL operator
>  - document the DSL "contract"
>  - implement the DSL "contract" using the new/extended Processor API
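The quoted contract above can be modelled in a few lines: by default a processor forwards the input record's timestamp unchanged, and the extended API lets it set a new one downstream. The Rec/Processor types below are simplified stand-ins for illustration only, not the real Kafka Streams Processor API:

```java
// Simplified model of timestamp propagation: result records inherit the
// input record's timestamp unless the processor explicitly overrides it.
// These types are stand-ins, not the real Kafka Streams classes.
public class TimestampDemo {
    record Rec(String key, String value, long timestamp) {
        Rec withTimestamp(long ts) { return new Rec(key, value, ts); }
    }

    interface Processor { Rec process(Rec input); }

    public static void main(String[] args) {
        Rec input = new Rec("k", "v", 100L);

        // Default contract: the output keeps the input record's timestamp.
        Processor passThrough =
            r -> new Rec(r.key(), r.value().toUpperCase(), r.timestamp());
        // Extended contract: the processor sets a new downstream timestamp.
        Processor overriding = r -> r.withTimestamp(200L);

        System.out.println(passThrough.process(input).timestamp());
        System.out.println(overriding.process(input).timestamp());
    }
}
```

The first processor models today's pass-through behaviour; the second models the proposed ability to set a new timestamp for downstream records.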





[jira] [Created] (KAFKA-9304) Image on Kafka docs shows incorrect message ID segments

2019-12-16 Thread Victoria Bialas (Jira)
Victoria Bialas created KAFKA-9304:
--

 Summary: Image on Kafka docs shows incorrect message ID segments
 Key: KAFKA-9304
 URL: https://issues.apache.org/jira/browse/KAFKA-9304
 Project: Kafka
  Issue Type: Bug
Reporter: Victoria Bialas


 

Docs page: [https://kafka.apache.org/documentation/#log]

Link to Tweet: [https://twitter.com/Preety48408391/status/1205764249995202560]

Hi Kafka team, it looks like there is an issue with the image shown below in 
section 5.4 of the Kafka documentation. In the 2nd segment (82xx.kafka), the 
message IDs are incorrect: they should start from 82xx, but instead start from 
34xx as in the 1st segment. Please correct.

 

[~mjsax] if you will assign this to me, I'll try to fix it in the docs. I may 
need some guidance from you, as the problem description isn't completely clear to me.


