[jira] [Created] (KAFKA-16101) Kafka cluster unavailable during KRaft migration rollback procedure

2024-01-09 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-16101:
--

 Summary: Kafka cluster unavailable during KRaft migration rollback 
procedure
 Key: KAFKA-16101
 URL: https://issues.apache.org/jira/browse/KAFKA-16101
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.6.1
Reporter: Paolo Patierno


Hello,

I was trying the KRaft migration rollback procedure locally and I came across a 
potential bug, or at least a situation where the cluster is not usable/available 
for a certain amount of time.

In order to test the procedure, I start with a cluster made of one broker and one 
ZooKeeper node. Then I start the migration towards a single KRaft controller node. 
The migration runs fine and reaches the "dual write" state.

From this point, I try to run the rollback procedure as described in the 
documentation.

As a first step, this involves ...
 * stopping the broker
 * removing the __cluster_metadata folder
 * removing the ZooKeeper migration flag and the controller-related configuration 
from the broker
 * restarting the broker
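For reference, this is roughly the broker configuration involved in that step (just a 
sketch; the values shown are only illustrative of a local setup like mine):
{code}
# ZooKeeper migration flag to remove (or set back to false)
zookeeper.metadata.migration.enable=true
# KRaft controller related configuration to remove
controller.quorum.voters=1@localhost:9093
controller.listener.names=CONTROLLER
{code}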

With the above steps done, the broker starts in ZooKeeper mode (no migration, no 
knowledge of the KRaft controllers) and it keeps logging the following messages at 
DEBUG level:
{code:java}
[2024-01-08 11:51:20,608] DEBUG 
[zk-broker-0-to-controller-forwarding-channel-manager]: Controller isn't 
cached, looking for local metadata changes 
(kafka.server.BrokerToControllerRequestThread)
[2024-01-08 11:51:20,608] DEBUG 
[zk-broker-0-to-controller-forwarding-channel-manager]: No controller provided, 
retrying after backoff (kafka.server.BrokerToControllerRequestThread)
[2024-01-08 11:51:20,629] DEBUG 
[zk-broker-0-to-controller-alter-partition-channel-manager]: Controller isn't 
cached, looking for local metadata changes 
(kafka.server.BrokerToControllerRequestThread)
[2024-01-08 11:51:20,629] DEBUG 
[zk-broker-0-to-controller-alter-partition-channel-manager]: No controller 
provided, retrying after backoff (kafka.server.BrokerToControllerRequestThread) 
{code}
What's happening should be clear.

The /controller znode in ZooKeeper still reports the KRaft controller 
(broker.id = 1) as the active controller. The broker reads it from the znode but 
doesn't know how to reach it.

The issue is that until the procedure is completed with the next steps (shutting 
down the KRaft controller, deleting the /controller znode), the cluster is unusable. 
Any admin or client operation against the broker doesn't work: it just hangs, and 
the broker doesn't reply.

Extending this scenario to a more complex one with 10, 20 or 50 brokers and 
partition replicas spread across them: when the brokers are rolled one by one 
(in ZK mode) and report the above error, the topics become unavailable one after 
the other, until all brokers are in this state and nothing works anymore. This is 
because, from the perspective of the (still running) KRaft controller, the brokers 
are not available anymore and the partition replicas are out of sync.

Of course, as soon as you complete the rollback procedure, after deleting the 
/controller znode, the brokers are able to elect a new controller among themselves 
and everything goes back to working.

My first question: isn't the cluster supposed to keep working and stay available 
during the rollback, even while the procedure is not completed yet? Or is the 
cluster being unavailable an accepted assumption during the rollback, until it's 
complete?

This "unavailability" time window could be reduced by deleting the /controller 
znode before shutting down the KRaft controllers to allow the brokers electing 
a new controller among them, but in this case, could be a race condition where 
KRaft controllers still running could steal leadership again?

Or is there perhaps something missing in the documentation that is leading to this 
problem?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16005) ZooKeeper to KRaft migration rollback missing disabling controller and migration configuration on brokers

2023-12-13 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-16005:
--

 Summary: ZooKeeper to KRaft migration rollback missing disabling 
controller and migration configuration on brokers
 Key: KAFKA-16005
 URL: https://issues.apache.org/jira/browse/KAFKA-16005
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.6.1
Reporter: Paolo Patierno


I was following the latest documentation additions to try the rollback process 
of a ZK cluster migrating to KRaft, while it's still in dual-write mode: 
[https://github.com/apache/kafka/pull/14160/files#diff-e4e8d893dc2a4e999c96713dd5b5857203e0756860df0e70fb0cb041aa4d347bR3786]

The first step is just about stopping the broker, deleting the __cluster_metadata 
folder and restarting the broker.

I think it's missing at least the following steps:
 * removing/disabling the ZooKeeper migration flag
 * removing all properties related to the controllers configuration (e.g. 
controller.quorum.voters, controller.listener.names, ...)

Without those steps, when the broker restarts, it re-creates the __cluster_metadata 
folder (because it syncs with the controllers while they are still running).

Also, when the controllers stop, the broker starts raising exceptions like this:
{code:java}
[2023-12-13 15:22:28,437] DEBUG [BrokerToControllerChannelManager id=0 name=quorum] Connection with localhost/127.0.0.1 (channelId=1) disconnected (org.apache.kafka.common.network.Selector)
java.net.ConnectException: Connection refused
    at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
    at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:571)
    at org.apache.kafka.server.util.InterBrokerSendThread.pollOnce(InterBrokerSendThread.java:109)
    at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:421)
    at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
[2023-12-13 15:22:28,438] INFO [BrokerToControllerChannelManager id=0 name=quorum] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2023-12-13 15:22:28,438] WARN [BrokerToControllerChannelManager id=0 name=quorum] Connection to node 1 (localhost/127.0.0.1:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
{code}
(where I have the controller running locally on port 9093)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16003) The znode /config/topics is not updated during KRaft migration in "dual-write" mode

2023-12-13 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-16003:
--

 Summary: The znode /config/topics is not updated during KRaft 
migration in "dual-write" mode
 Key: KAFKA-16003
 URL: https://issues.apache.org/jira/browse/KAFKA-16003
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 3.6.1
Reporter: Paolo Patierno


I tried the following scenario ...

I have a ZooKeeper-based cluster and create a my-topic-1 topic (without 
specifying any specific configuration for it). The correct znodes are created 
under /config/topics and /brokers/topics.

I start a migration to KRaft but don't move forward past the "dual write" mode. 
While in this mode, I create a new my-topic-2 topic (still without any specific 
config). I see that a new znode is created under /brokers/topics but NOT under 
/config/topics. It seems that the KRaft controller is not updating this 
information in ZooKeeper during the dual write. The controller log shows that a 
write to ZooKeeper was done, but apparently not everything was written:
{code:java}
2023-12-13 10:23:26,229 TRACE [KRaftMigrationDriver id=3] Create Topic 
my-topic-2, ID Macbp8BvQUKpzmq2vG_8dA. Transitioned migration state from 
ZkMigrationLeadershipState{kraftControllerId=3, kraftControllerEpoch=7, 
kraftMetadataOffset=445, kraftMetadataEpoch=7, lastUpdatedTimeMs=1702462785587, 
migrationZkVersion=236, controllerZkEpoch=3, controllerZkVersion=3} to 
ZkMigrationLeadershipState{kraftControllerId=3, kraftControllerEpoch=7, 
kraftMetadataOffset=445, kraftMetadataEpoch=7, lastUpdatedTimeMs=1702462785587, 
migrationZkVersion=237, controllerZkEpoch=3, controllerZkVersion=3} 
(org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
[controller-3-migration-driver-event-handler]
2023-12-13 10:23:26,229 DEBUG [KRaftMigrationDriver id=3] Made the following ZK 
writes when handling KRaft delta: {CreateTopic=1} 
(org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
[controller-3-migration-driver-event-handler] {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15754) The kafka-storage tool can generate UUID starting with "-"

2023-10-31 Thread Paolo Patierno (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno resolved KAFKA-15754.

Resolution: Not A Problem

> The kafka-storage tool can generate UUID starting with "-"
> --
>
> Key: KAFKA-15754
> URL: https://issues.apache.org/jira/browse/KAFKA-15754
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Major
>
> Using the kafka-storage.sh tool, it seems that it can still generate a UUID 
> starting with a dash "-", which then breaks how the argparse4j library works. 
> With such an UUID (i.e. -rmdB0m4T4--Y4thlNXk4Q in my case) the tool exits with 
> the following error:
> kafka-storage: error: argument --cluster-id/-t: expected one argument
> Said that, it seems that this problem was already addressed in the 
> Uuid.randomUuid method which keeps generating a new UUID until it doesn't 
> start with "-". This is the commit addressing it 
> [https://github.com/apache/kafka/commit/5c1dd493d6f608b566fdad5ab3a896cb13622bce]
> The problem is that when the toString is called on the Uuid instance, it's 
> going to do a Base64 encoding on the generated UUID this way:
> {code:java}
> Base64.getUrlEncoder().withoutPadding().encodeToString(getBytesFromUuid()); 
> {code}
> Not sure why, but the code is using an URL (safe) encoder which, taking a 
> look at the Base64 class in Java, is using a RFC4648_URLSAFE encoder using 
> the following alphabet:
>  
> {code:java}
> private static final char[] toBase64URL = new char[]{'A', 'B', 'C', 'D', 'E', 
> 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 
> 'U', 'V', 'W', 'X', 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 
> 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 
> 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '-', '_'}; {code}
> which as you can see includes the "-" character.
> So despite the current Uuid.randomUuid is avoiding the generation of a UUID 
> containing a dash, the Base64 encoding operation can return a final UUID 
> starting with the dash instead.
>  
> I was wondering if there is any good reason for using a Base64 URL encoder 
> and not just the RFC4648 (not URL safe) which uses the common Base64 alphabet 
> not containing the "-".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (KAFKA-15754) The kafka-storage tool can generate UUID starting with "-"

2023-10-31 Thread Paolo Patierno (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno reopened KAFKA-15754:


> The kafka-storage tool can generate UUID starting with "-"
> --
>
> Key: KAFKA-15754
> URL: https://issues.apache.org/jira/browse/KAFKA-15754
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Major
>
> Using the kafka-storage.sh tool, it seems that it can still generate a UUID 
> starting with a dash "-", which then breaks how the argparse4j library works. 
> With such an UUID (i.e. -rmdB0m4T4--Y4thlNXk4Q in my case) the tool exits with 
> the following error:
> kafka-storage: error: argument --cluster-id/-t: expected one argument
> Said that, it seems that this problem was already addressed in the 
> Uuid.randomUuid method which keeps generating a new UUID until it doesn't 
> start with "-". This is the commit addressing it 
> [https://github.com/apache/kafka/commit/5c1dd493d6f608b566fdad5ab3a896cb13622bce]
> The problem is that when the toString is called on the Uuid instance, it's 
> going to do a Base64 encoding on the generated UUID this way:
> {code:java}
> Base64.getUrlEncoder().withoutPadding().encodeToString(getBytesFromUuid()); 
> {code}
> Not sure why, but the code is using an URL (safe) encoder which, taking a 
> look at the Base64 class in Java, is using a RFC4648_URLSAFE encoder using 
> the following alphabet:
>  
> {code:java}
> private static final char[] toBase64URL = new char[]{'A', 'B', 'C', 'D', 'E', 
> 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 
> 'U', 'V', 'W', 'X', 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 
> 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 
> 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '-', '_'}; {code}
> which as you can see includes the "-" character.
> So despite the current Uuid.randomUuid is avoiding the generation of a UUID 
> containing a dash, the Base64 encoding operation can return a final UUID 
> starting with the dash instead.
>  
> I was wondering if there is any good reason for using a Base64 URL encoder 
> and not just the RFC4648 (not URL safe) which uses the common Base64 alphabet 
> not containing the "-".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15754) The kafka-storage tool can generate UUID starting with "-"

2023-10-30 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-15754:
--

 Summary: The kafka-storage tool can generate UUID starting with "-"
 Key: KAFKA-15754
 URL: https://issues.apache.org/jira/browse/KAFKA-15754
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Paolo Patierno


Using the kafka-storage.sh tool, it seems that it can still generate a UUID 
starting with a dash "-", which then breaks how the argparse4j library works. 
With such a UUID (e.g. -rmdB0m4T4--Y4thlNXk4Q in my case) the tool exits with 
the following error:
kafka-storage: error: argument --cluster-id/-t: expected one argument
That said, it seems this problem was already addressed in the Uuid.randomUuid 
method, which keeps generating a new UUID until it no longer starts with "-". 
This is the commit addressing it: 
[https://github.com/apache/kafka/commit/5c1dd493d6f608b566fdad5ab3a896cb13622bce]

The problem is that when toString is called on the Uuid instance, it performs a 
Base64 encoding of the generated UUID this way:
{code:java}
Base64.getUrlEncoder().withoutPadding().encodeToString(getBytesFromUuid()); 
{code}
Not sure why, but the code is using a URL-safe encoder which, looking at the 
Base64 class in Java, uses an RFC4648_URLSAFE encoder with the following alphabet:
 
{code:java}
private static final char[] toBase64URL = new char[]{'A', 'B', 'C', 'D', 'E', 
'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 
'V', 'W', 'X', 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 
'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '0', 
'1', '2', '3', '4', '5', '6', '7', '8', '9', '-', '_'}; {code}
which, as you can see, includes the "-" character.
So even though the current Uuid.randomUuid avoids generating a UUID containing a 
"-", the Base64-encoded result can still end up containing (or starting with) a 
"-".
 
I was wondering if there is any good reason for using a Base64 URL encoder rather 
than plain RFC4648 (not URL-safe), which uses the common Base64 alphabet that 
doesn't contain the "-".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15689) KRaftMigrationDriver not logging the skipped event when expected state is wrong

2023-10-26 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-15689:
--

 Summary: KRaftMigrationDriver not logging the skipped event when 
expected state is wrong
 Key: KAFKA-15689
 URL: https://issues.apache.org/jira/browse/KAFKA-15689
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Paolo Patierno
Assignee: Paolo Patierno


The KRaftMigrationDriver.checkDriverState method is used in multiple 
implementations of the MigrationEvent base class, but when it comes to logging 
that an event was skipped because the expected state is wrong, it always logs 
"KRaftMigrationDriver" instead of the skipped event.
This is because its code has something like this:
 
{code:java}
log.info("Expected driver state {} but found {}. Not running this event {}.",
expectedState, migrationState, this.getClass().getSimpleName()); {code}
Of course, "this" refers to the KRaftMigrationDriver class.
It should print the specific skipped event instead.
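A minimal sketch of the intended fix, assuming the skipped event instance is 
available where checkDriverState logs the message (the "event" parameter name 
below is only illustrative):
{code:java}
// "event" is the MigrationEvent instance being skipped (illustrative name)
log.info("Expected driver state {} but found {}. Not running this event {}.",
    expectedState, migrationState, event.getClass().getSimpleName());
{code}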



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14883) Broker state should be "observer" in KRaft quorum

2023-04-07 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-14883:
--

 Summary: Broker state should be "observer" in KRaft quorum
 Key: KAFKA-14883
 URL: https://issues.apache.org/jira/browse/KAFKA-14883
 Project: Kafka
  Issue Type: Improvement
  Components: kraft, metrics
Affects Versions: 3.4.0
Reporter: Paolo Patierno
Assignee: Paolo Patierno


Currently, the `current-state` KRaft-related metric reports the `follower` state 
for a broker, while technically it should be reported as `observer`, as the 
`kafka-metadata-quorum` tool does.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14411) Logging warning when partitions don't exist on assign request

2022-11-21 Thread Paolo Patierno (Jira)
Paolo Patierno created KAFKA-14411:
--

 Summary: Logging warning when partitions don't exist on assign 
request
 Key: KAFKA-14411
 URL: https://issues.apache.org/jira/browse/KAFKA-14411
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Paolo Patierno


{{When using the assign method on a consumer with a non-existing topic 
(and the Kafka cluster has auto-creation disabled), the log shows messages 
like:}}
{code:java}
Subscribed to partition(s): not-existing-topic-1
Error while fetching metadata with correlation id 3 : 
{not-existing-topic=UNKNOWN_TOPIC_OR_PARTITION}{code}
{{which could make sense because, if at some point the user creates the topic, the 
consumer will be subscribed to it.}}

{{It's different when the topic exists but the partition requested by the 
consumer does not.}}
{code:java}
Subscribed to partition(s): existing-topic-1 {code}
{{The above message shows that the consumer is subscribed, but it will start to 
get messages only when the partition is created as well. Anyway, the log 
could be misleading, as it doesn't print at least a WARNING that the requested 
partition doesn't exist.}}

{{So, just as an error on fetching metadata is logged when the topic doesn't exist 
(with auto-creation disabled), it could be useful to have WARNING messages in the 
log about requested partitions that don't exist.}}
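For reference, a minimal sketch of the scenario (assuming a local broker and a 
topic "existing-topic" that only has partition 0):
{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignMissingPartition {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // partition 1 doesn't exist: assign() still succeeds and only logs
            // "Subscribed to partition(s): existing-topic-1", with no warning
            consumer.assign(Collections.singletonList(new TopicPartition("existing-topic", 1)));
            consumer.poll(Duration.ofSeconds(1)); // returns no records
        }
    }
}
{code}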



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (KAFKA-5588) ConsoleConsumer : useless --new-consumer option

2017-09-26 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno reopened KAFKA-5588:
---

> ConsoleConsumer : useless --new-consumer option
> --
>
> Key: KAFKA-5588
> URL: https://issues.apache.org/jira/browse/KAFKA-5588
> Project: Kafka
>  Issue Type: Bug
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Minor
>
> Hi,
> it seems to me that the --new-consumer option on the ConsoleConsumer is 
> useless.
> The useOldConsumer var is related to specify --zookeeper on the command line 
> but then the bootstrap-server option (or the --new-consumer) can't be 
> used.
> If you use --bootstrap-server option then the new consumer is used 
> automatically so no need for --new-consumer.
> It turns out the using the old or new consumer is just related on using 
> --zookeeper or --bootstrap-server option (which can't be used together, so I 
> can't use new consumer connecting to zookeeper).
> It's also clear when you use --zookeeper for the old consumer and the output 
> from help says :
> "Consider using the new consumer by passing [bootstrap-server] instead of 
> [zookeeper]"
> I'm going to remove the --new-consumer option from the tool.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5925) Adding records deletion operation to the new Admin Client API

2017-09-18 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5925:
-

 Summary: Adding records deletion operation to the new Admin Client 
API
 Key: KAFKA-5925
 URL: https://issues.apache.org/jira/browse/KAFKA-5925
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Minor


Hi,
The 
[KIP-107|https://cwiki.apache.org/confluence/display/KAFKA/KIP-107%3A+Add+purgeDataBefore%28%29+API+in+AdminClient]
 provides a way to delete the messages before a specified offset inside a topic 
partition, i.e. messages we don't want to keep anymore, without relying on the 
time-based and size-based log retention policies. The already implemented 
protocol request and response messages (DeleteRecords API, key 21) are used 
only by the "legacy" Admin Client in Scala and aren't exposed by the new Admin 
Client API in Java.

The 
[KIP-204|https://cwiki.apache.org/confluence/display/KAFKA/KIP-204+%3A+adding+records+deletion+operation+to+the+new+Admin+Client+API]
 is about addressing this JIRA.
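Just to illustrate the direction, a rough sketch of how such an operation could 
look on the new Admin Client (names mirror the KIP-204 proposal and are 
illustrative, not an existing API):
{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class DeleteRecordsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // delete all records before offset 42 in partition 0 of "my-topic"
            Map<TopicPartition, RecordsToDelete> toDelete = Collections.singletonMap(
                new TopicPartition("my-topic", 0), RecordsToDelete.beforeOffset(42L));
            admin.deleteRecords(toDelete).all().get();
        }
    }
}
{code}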




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5919) Delete records command "version" parameter ignored

2017-09-18 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5919:
-

 Summary: Delete records command "version" parameter ignored
 Key: KAFKA-5919
 URL: https://issues.apache.org/jira/browse/KAFKA-5919
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Minor


Hi,
the kafka-delete-records script allows the user to pass information about the 
records to delete through a JSON file. Such a file, as described in the command 
help, consists of a "partitions" array and a "version" field. Reading 
[KIP-107|https://cwiki.apache.org/confluence/display/KAFKA/KIP-107%3A+Add+purgeDataBefore%28%29+API+in+AdminClient]
 and the DeleteRecords API (Key: 21) description, it's not clear what such a field 
is for, and it's not even used at all (in the current implementation).
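For context, the JSON file passed to the tool looks roughly like this (an 
illustrative example; the "version" field is the one in question):
{code}
{
  "partitions": [
    { "topic": "my-topic", "partition": 0, "offset": 42 }
  ],
  "version": 1
}
{code}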
I'm going to remove it from the tool help description; it should not need a KIP 
because today it's just ignored and, even using a JSON file without "version", 
the tool just works.
[~lindong] you implemented the delete command, are my considerations right?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5739) Rewrite KStreamPeekTest at processor level avoiding driver usage

2017-08-16 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5739:
-

 Summary: Rewrite KStreamPeekTest at processor level avoiding 
driver usage
 Key: KAFKA-5739
 URL: https://issues.apache.org/jira/browse/KAFKA-5739
 Project: Kafka
  Issue Type: Test
  Components: streams
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Minor


Hi,
as already done for the {{KStreamPrintTest}}, we could remove the usage of 
{{KStreamTestDriver}} in the {{KStreamPeekTest}} as well, testing it at the 
processor level rather than at the stream level.
My proposal is to:

* create the {{KStreamPeek}} instance providing the action which fills a 
collection, as already happens today
* test both {{forwardDownStream}} values, true and false
* use the {{MockProcessorContext}} class to override the {{forward}} method, 
filling a streamObserved collection as happens today when 
{{forwardDownStream}} is true; check that {{forward}} isn't called when 
{{forwardDownStream}} is false (failing the test if it is)

Thanks,
Paolo 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5684) KStreamPrintProcessor as customized KStreamPeekProcessor

2017-08-16 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno resolved KAFKA-5684.
---
Resolution: Feedback Received

> KStreamPrintProcessor as customized KStreamPeekProcessor
> 
>
> Key: KAFKA-5684
> URL: https://issues.apache.org/jira/browse/KAFKA-5684
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Minor
>
> Hi,
> the {{KStreamPrintProcessor}} is implemented from scratch (from the 
> {{AbstractProcessor}}) and the same for the related supplier.
> It looks to me that it's just a special {{KStreamPeekProcessor}} with 
> forwardDownStream to false and that allows the possibility to specify Serdes 
> instances used if key/values are bytes.
> At same time used by a {{print()}} method it provides a fast way to print 
> data flowing through the pipeline (while using just {{peek()}} you need to 
> write the code).
> I think that it could be useful to refactoring the {{KStreamPrintProcessor}} 
> as derived from the {{KStreamPeekProcessor}} customizing its behavior.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5684) KStreamPrintProcessor as customized KStreamPeekProcessor

2017-08-01 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5684:
-

 Summary: KStreamPrintProcessor as customized KStreamPeekProcessor
 Key: KAFKA-5684
 URL: https://issues.apache.org/jira/browse/KAFKA-5684
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Minor


Hi,
the {{KStreamPrintProcessor}} is implemented from scratch (from the 
{{AbstractProcessor}}), and the same goes for the related supplier.
It looks to me like it's just a special {{KStreamPeekProcessor}} with 
forwardDownStream set to false, plus the possibility to specify the Serdes 
instances used when keys/values are bytes.
At the same time, being used by the {{print()}} method, it provides a fast way to 
print data flowing through the pipeline (while using just {{peek()}} you need to 
write the code yourself).
I think it could be useful to refactor the {{KStreamPrintProcessor}} so that it 
derives from the {{KStreamPeekProcessor}}, customizing its behavior.

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5643) Using _DUCKTAPE_OPTIONS has no effect on executing tests

2017-07-26 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5643:
-

 Summary: Using _DUCKTAPE_OPTIONS has no effect on executing tests
 Key: KAFKA-5643
 URL: https://issues.apache.org/jira/browse/KAFKA-5643
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Paolo Patierno
Assignee: Paolo Patierno


Hi,
as described in the documentation, you should be able to enable debugging using 
the following line:

_DUCKTAPE_OPTIONS="--debug" bash tests/docker/run_tests.sh | tee debug_logs.txt

Instead, _DUCKTAPE_OPTIONS isn't handled by the run_tests.sh script, so it's not 
passed to ducker-ak and, finally, to the ducktape command line.

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5619) Make --new-consumer option as deprecated in all tools

2017-07-20 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5619:
-

 Summary: Make --new-consumer option as deprecated in all tools
 Key: KAFKA-5619
 URL: https://issues.apache.org/jira/browse/KAFKA-5619
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Paolo Patierno
Assignee: Paolo Patierno


Hi,
as already described in [https://issues.apache.org/jira/browse/KAFKA-5599], 
it's useful to mark the --new-consumer option as deprecated in all the other 
tools which use it (ConsumerPerformance and ConsumerGroupCommand). It will still 
be available in the next major release, with the option then being removed in 
the subsequent release cycle.

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5588) ConsoleConsumer : useless --new-consumer option

2017-07-17 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno resolved KAFKA-5588.
---
Resolution: Later

Before removing the --new-consumer option, the first step is to mark it as 
deprecated in the next release. It will be removed in a new release cycle 
(after deprecation).

> ConsoleConsumer : useless --new-consumer option
> --
>
> Key: KAFKA-5588
> URL: https://issues.apache.org/jira/browse/KAFKA-5588
> Project: Kafka
>  Issue Type: Bug
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Minor
>
> Hi,
> it seems to me that the --new-consumer option on the ConsoleConsumer is 
> useless.
> The useOldConsumer var is related to specify --zookeeper on the command line 
> but then the bootstrap-server option (or the --new-consumer) can't be 
> used.
> If you use --bootstrap-server option then the new consumer is used 
> automatically so no need for --new-consumer.
> It turns out the using the old or new consumer is just related on using 
> --zookeeper or --bootstrap-server option (which can't be used together, so I 
> can't use new consumer connecting to zookeeper).
> It's also clear when you use --zookeeper for the old consumer and the output 
> from help says :
> "Consider using the new consumer by passing [bootstrap-server] instead of 
> [zookeeper]"
> I'm going to remove the --new-consumer option from the tool.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5599) ConsoleConsumer : --new-consumer option as deprecated

2017-07-17 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5599:
-

 Summary: ConsoleConsumer : --new-consumer option as deprecated
 Key: KAFKA-5599
 URL: https://issues.apache.org/jira/browse/KAFKA-5599
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Paolo Patierno
Assignee: Paolo Patierno


Hi,
it seems to me that the --new-consumer option on the ConsoleConsumer is useless.
The useOldConsumer var is tied to specifying --zookeeper on the command line, in 
which case the --bootstrap-server option (or --new-consumer) can't be used.
If you use the --bootstrap-server option then the new consumer is used 
automatically, so there is no need for --new-consumer.
It turns out that using the old or the new consumer just depends on using the 
--zookeeper or the --bootstrap-server option (which can't be used together, so I 
can't use the new consumer connecting to ZooKeeper).
It's also clear when you use --zookeeper for the old consumer and the help output 
says:
"Consider using the new consumer by passing [bootstrap-server] instead of 
[zookeeper]"

Before removing the --new-consumer option, this JIRA is for marking it as 
deprecated in the next release (then removing the option in a subsequent 
release).

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5588) ConsoleConsumer : useless --new-consumer option

2017-07-13 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5588:
-

 Summary: ConsoleConsumer : useless --new-consumer option
 Key: KAFKA-5588
 URL: https://issues.apache.org/jira/browse/KAFKA-5588
 Project: Kafka
  Issue Type: Bug
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Minor


Hi,
it seems to me that the --new-consumer option on the ConsoleConsumer is useless.
The useOldConsumer var is tied to specifying --zookeeper on the command line, in 
which case the --bootstrap-server option (or --new-consumer) can't be used.
If you use the --bootstrap-server option then the new consumer is used 
automatically, so there is no need for --new-consumer.
It turns out that using the old or the new consumer just depends on using the 
--zookeeper or the --bootstrap-server option (which can't be used together, so I 
can't use the new consumer connecting to ZooKeeper).
I'm going to remove the --new-consumer option from the tool.

Thanks,
Paolo.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5561) Rewrite TopicCommand using the new Admin client

2017-07-06 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5561:
-

 Summary: Rewrite TopicCommand using the new Admin client
 Key: KAFKA-5561
 URL: https://issues.apache.org/jira/browse/KAFKA-5561
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Paolo Patierno
Assignee: Paolo Patierno


Hi, 

as suggested in https://issues.apache.org/jira/browse/KAFKA-3331, it would be 
great to have the TopicCommand use the new Admin client instead of the way it 
works today.
As encouraged by [~gwenshap] in the above JIRA, I'm going to work on it.

Thanks,
Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5557) Using a logPrefix inside the StreamPartitionAssignor

2017-07-05 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5557:
-

 Summary: Using a logPrefix inside the StreamPartitionAssignor
 Key: KAFKA-5557
 URL: https://issues.apache.org/jira/browse/KAFKA-5557
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Trivial


Hi,
the "stream-thread [%s]" prefix is repeated in all the logging messages inside 
the StreamPartitionAssignor. Using a logPrefix, as done in the StreamThread 
class, would be better.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5536) Tools split between Java and Scala implementation

2017-06-29 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5536:
-

 Summary: Tools split between Java and Scala implementation
 Key: KAFKA-5536
 URL: https://issues.apache.org/jira/browse/KAFKA-5536
 Project: Kafka
  Issue Type: Wish
Reporter: Paolo Patierno


Hi,
is there any specific reason why the tools are split between Java and Scala 
implementations?
Maybe it would be better to have only one language for all of them.
What do you think?

Thanks,
Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5532) Making bootstrap.servers property a first citizen option for the ProducerPerformance

2017-06-28 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5532:
-

 Summary: Making bootstrap.servers property a first citizen option 
for the ProducerPerformance
 Key: KAFKA-5532
 URL: https://issues.apache.org/jira/browse/KAFKA-5532
 Project: Kafka
  Issue Type: Improvement
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Trivial


Hi,

using the ProducerPerformance tool, you have to specify the bootstrap.servers 
property through the producer-props or producer-config option. It would be better 
to have bootstrap.servers as a first-class option like in all the other tools, 
i.e. a dedicated --bootstrap-servers option.

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5525) Streams reset tool should have same console output with or without dry-run

2017-06-27 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5525:
-

 Summary: Streams reset tool should have same console output with 
or without dry-run
 Key: KAFKA-5525
 URL: https://issues.apache.org/jira/browse/KAFKA-5525
 Project: Kafka
  Issue Type: Improvement
Reporter: Paolo Patierno
Priority: Minor


Hi,
I see that the Streams reset tool provides slightly different console output 
depending on whether you execute it with "dry-run" (so without executing any real 
action) or without it.
With dry-run :

{code}
Dry run displays the actions which will be performed when running Streams 
Reset Tool
Following input topics offsets will be reset to beginning (for consumer group 
streams-wordcount)
Topic: streams-file-input
Done.
Deleting all internal/auto-created topics for application streams-wordcount
Topic: streams-wordcount-Counts-repartition
Topic: streams-wordcount-Counts-changelog
Done.
{code}

without dry-run :

{code}
Seek-to-beginning for input topics [streams-file-input]
Done.
Deleting all internal/auto-created topics for application streams-wordcount
Topic streams-wordcount-Counts-repartition is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
Topic streams-wordcount-Counts-changelog is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
Done.
{code}

I think that the "Seek-to-beginning for input topics [streams-file-input]" 
message could be used for both the dry-run and the non-dry-run versions.
The output should be consistent and the only difference should be whether real 
actions are executed or not.
I'm working on a trivial PR as a proposal.

Thanks,
Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5523) ReplayLogProducer not using the new Kafka consumer

2017-06-27 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5523:
-

 Summary: ReplayLogProducer not using the new Kafka consumer
 Key: KAFKA-5523
 URL: https://issues.apache.org/jira/browse/KAFKA-5523
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Paolo Patierno
Priority: Minor


Hi,

the ReplayLogProducer is using the latest Kafka producer but not the latest 
Kafka consumer. Is this tool deprecated today? I see that something similar 
could be done using MirrorMaker. [~ijuma] Does it make sense to update the 
ReplayLogProducer to the latest Kafka consumer?

Thanks,
Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5516) Formatting verifiable producer/consumer output in a similar fashion

2017-06-26 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5516:
-

 Summary: Formatting verifiable producer/consumer output in a 
similar fashion
 Key: KAFKA-5516
 URL: https://issues.apache.org/jira/browse/KAFKA-5516
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Trivial


Hi,
this follows the proposal to have the verifiable producer/consumer provide very 
similar output, where the "timestamp" is always the first column, followed by 
the event "name" and then all the data specific to that event.
It includes a refactoring of the verifiable producer so that it formats output in 
the same way as the verifiable consumer.

Thanks,
Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5506) bin/kafka-consumer-groups.sh failing to query offsets

2017-06-23 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno resolved KAFKA-5506.
---
Resolution: Not A Problem

> bin/kafka-consumer-groups.sh failing to query offsets
> -
>
> Key: KAFKA-5506
> URL: https://issues.apache.org/jira/browse/KAFKA-5506
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
> Environment: Linux slfd06 4.4.0-78-generic #99~14.04.2-Ubuntu SMP Thu 
> Apr 27 18:49:46 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Yousef Amar
>  Labels: features, newbie
>
> When I found that {{bin/kafka-consumer-offset-checker.sh}} was deprecated and 
> didn't work, I checked the docs and ran the following instead (using new 
> consumer, and coordinator vs zookeeper):
> {code:java}
> bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 
> --describe --group test
> {code}
> I kept getting a NullPointerException though (line numbers are a bit off 
> because of debug code):
> {code:java}
> java.lang.NullPointerException
> at org.apache.kafka.common.utils.Utils.join(Utils.java:399)
> at 
> org.apache.kafka.common.requests.OffsetFetchRequest$Builder.toString(OffsetFetchRequest.java:77)
> at java.lang.String.valueOf(String.java:2994)
> at java.lang.StringBuilder.append(StringBuilder.java:131)
> at 
> org.apache.kafka.clients.ClientRequest.toString(ClientRequest.java:65)
> at 
> org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:375)
> at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:332)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.trySend(ConsumerNetworkClient.java:409)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:252)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:199)
> at kafka.admin.AdminClient$$anon$1.run(AdminClient.scala:61)
> at java.lang.Thread.run(Thread.java:748)
> Error: Executing consumer group command failed due to The server experienced 
> an unexpected error when processing the request
> {code}
> I tracked this down to the following. The request builder that is 
> instantiated 
> [here|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/AdminClient.scala#L198]
>  has its {{partitions}} set to {{ALL_TOPIC_PARTITIONS}} which is null (v2 or 
> newer to request all topic partitions). Later, [when sending the 
> request|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/NetworkClient.java#L375],
>  it's converted to a string. But {{partitions}} above can only be null when 
> built that way, so 
> [this|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java#L76]
>  throws the exception.
> I'm quite new to Kafka, so I'm still not entirely sure if I'm doing something 
> wrong or if this is indeed a bug. As such, any pointers or advice would be 
> much appreciated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5481) ListOffsetResponse isn't logged in the right way with trace level enabled

2017-06-20 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5481:
-

 Summary: ListOffsetResponse isn't logged in the right way with 
trace level enabled
 Key: KAFKA-5481
 URL: https://issues.apache.org/jira/browse/KAFKA-5481
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: Paolo Patierno
Assignee: Paolo Patierno


Hi,
when the trace level is enabled, the ListOffsetResponse isn't logged in a useful 
way: just the class name is shown in the log:

{code}
[2017-06-20 14:18:50,724] TRACE Received ListOffsetResponse 
org.apache.kafka.common.requests.ListOffsetResponse@7ed5ecd9 from broker 
new-host:9092 (id: 0 rack: null) 
(org.apache.kafka.clients.consumer.internals.Fetcher:674)
{code}

The class doesn't provide a toString() for this purpose.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5469) Created state changelog topics not logged correctly

2017-06-19 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5469:
-

 Summary: Created state changelog topics not logged correctly
 Key: KAFKA-5469
 URL: https://issues.apache.org/jira/browse/KAFKA-5469
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Paolo Patierno
Assignee: Paolo Patierno


In the StreamPartitionAssignor class, the created state changelog topics aren't 
logged correctly when the DEBUG log level is set.

[2017-06-19 12:18:44,186] DEBUG stream-thread 
[streams-temperature-6d25c7ff-0927-4469-8ab3-2685e7d8-StreamThread-1] 
Created state changelog topics 
{streams-temperature-KSTREAM-REDUCE-STATE-STORE-02-changelog=org.apache.kafka.streams.processor.internals.StreamPartitionAssignor$InternalTopicMetadata@65ae7693}
 from the parsed topology. 
(org.apache.kafka.streams.processor.internals.StreamPartitionAssignor:477)

compared to the repartition topics, which are logged correctly:

[2017-06-19 12:18:37,871] DEBUG stream-thread 
[streams-temperature-6d25c7ff-0927-4469-8ab3-2685e7d8-StreamThread-1] 
Created repartition topics [Partition(topic = 
streams-temperature-KSTREAM-REDUCE-STATE-STORE-02-repartition, 
partition = 0, leader = none, replicas = [], isr = [])] from the parsed 
topology. 
(org.apache.kafka.streams.processor.internals.StreamPartitionAssignor:402)

At the same time, if the source topics are not created before launching the 
streams application, the state changelog topics log shows just {} (the 
placeholder), while for the repartition topics it shows [], which is right 
because it's an empty list.

It seems that there are two problems: for the state changelog topics, values() 
is not used, and InternalTopicMetadata doesn't have a toString() producing a 
well-formatted output.
I'm already working on that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5426) One second delay in kafka-console-producer

2017-06-16 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno resolved KAFKA-5426.
---
Resolution: Not A Bug

> One second delay in kafka-console-producer
> --
>
> Key: KAFKA-5426
> URL: https://issues.apache.org/jira/browse/KAFKA-5426
> Project: Kafka
>  Issue Type: Bug
>  Components: config, producer 
>Affects Versions: 0.10.2.1
> Environment: Ubuntu 16.04 with OpenJDK-8 and Ubuntu 14.04 with 
> OpenJDK-7
>Reporter: Francisco Robles Martin
>Assignee: Paolo Patierno
>Priority: Minor
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Hello!
> I have been trying to change the default delay for the original 
> kafka-console-producer with both adding the producer.properties with a 
> different configuration  for linger.ms and batch.size, and also providing it 
> directly in the command line with "--property" but nothing works. 
> I have also tried it in a VM with Ubuntu 14.04 and using 0.8.2.1 Kafka 
> version but I have had the same result. I don't know if it has been designed 
> like that to don't be able to change the behaviour of the console-producer or 
> if this is a bug.
> Here you can see my original post in StackOverFlow asking for help in this 
> issue: 
> https://stackoverflow.com/questions/44334304/kafka-spark-streaming-constant-delay-of-1-second



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-15 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050917#comment-16050917
 ] 

Paolo Patierno commented on KAFKA-5434:
---

[~vahid] I got your point. Maybe we could have an option like --waiting-partition 
(or the opposite, --check-partition-exists) in order to have both behaviours. 
Wdyt? Hoping someone else jumps into this.

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>
> Hi,
> if I specify the --partition option for the console consumer with a not 
> existing partition for a topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata but when it receives topic 
> information and it doesn't find the requested partition inside such metadata, 
> the code retries new time.
> Could be it worst to check if the partition exists using the partitionFor 
> method before calling the assign in the seek of the BaseConsumer and throwing 
> an exception so printing an error on the console ?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5454) Add a new Kafka Streams example IoT oriented

2017-06-15 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5454:
--
Status: Patch Available  (was: Open)

> Add a new Kafka Streams example IoT oriented
> 
>
> Key: KAFKA-5454
> URL: https://issues.apache.org/jira/browse/KAFKA-5454
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Trivial
>
> Hi,
> I had the doubt to open a JIRA or not for this but I have a PR with an 
> example of using Kafka Streams in a simple IoT scenario using "tumbling" 
> window for processing maximum temperature value in the latest 5 seconds and 
> sending an "alarm" if it's over 20.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5412) Using connect-console-sink/source.properties raises an exception related to "file" property not found

2017-06-15 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno reassigned KAFKA-5412:
-

Assignee: Paolo Patierno

> Using connect-console-sink/source.properties raises an exception related to 
> "file" property not found
> -
>
> Key: KAFKA-5412
> URL: https://issues.apache.org/jira/browse/KAFKA-5412
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
> Fix For: 0.11.1.0
>
>
> With the latest 0.11.1.0-SNAPSHOT it happens that the Kafka Connect example 
> using connect-console-sink/source.properties doesn't work anymore because the 
> needed "file" property isn't found. This is because the underlying used 
> FileStreamSink/Source connector and task has defined a ConfigDef with "file" 
> as mandatory parameter. In the case of console example we want to have 
> file=null so that stdin and stdout are used. 
> One possible solution and workaround is set "file=" inside the provided 
> connect-console-sink/source.properties. The other one could be modify the 
> FileStreamSink/Source source code in order to remove the "file" definition 
> from the ConfigDef.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5409) Providing a custom client-id to the ConsoleProducer tool

2017-06-15 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno reassigned KAFKA-5409:
-

Assignee: Paolo Patierno

> Providing a custom client-id to the ConsoleProducer tool
> 
>
> Key: KAFKA-5409
> URL: https://issues.apache.org/jira/browse/KAFKA-5409
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Minor
>
> Hi,
> I see that the client-id properties for the ConsoleProducer tool is always 
> "console-producer". It could be useful having it as parameter on the command 
> line or generating a random one like happens for the ConsolerConsumer.
> If it makes sense to you, I can work on that.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5410) Fix taskClass() method name in Connector and flush() signature in SinkTask

2017-06-15 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno reassigned KAFKA-5410:
-

Assignee: Paolo Patierno

> Fix taskClass() method name in Connector and flush() signature in SinkTask
> --
>
> Key: KAFKA-5410
> URL: https://issues.apache.org/jira/browse/KAFKA-5410
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>
> Hi,
> the current documentation refers to getTaskClass() for the Connector class 
> during the file example. At same time, a different signature is showed for 
> the flush() method in SinkTask which now has OffsetMetadata as well.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-15 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno reassigned KAFKA-5434:
-

Assignee: Paolo Patierno  (was: Vahid Hashemian)

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>
> Hi,
> if I specify the --partition option for the console consumer with a not 
> existing partition for a topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata but when it receives topic 
> information and it doesn't find the requested partition inside such metadata, 
> the code retries new time.
> Could be it worst to check if the partition exists using the partitionFor 
> method before calling the assign in the seek of the BaseConsumer and throwing 
> an exception so printing an error on the console ?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5454) Add a new Kafka Streams example IoT oriented

2017-06-15 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5454:
-

 Summary: Add a new Kafka Streams example IoT oriented
 Key: KAFKA-5454
 URL: https://issues.apache.org/jira/browse/KAFKA-5454
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Trivial


Hi,
I wasn't sure whether to open a JIRA for this, but I have a PR with an example 
of using Kafka Streams in a simple IoT scenario, using a "tumbling" window for 
processing the maximum temperature value over the latest 5 seconds and sending an 
"alarm" if it's over 20.

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5446) Annoying braces showed on log.error using streams

2017-06-14 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5446:
--
Status: Patch Available  (was: Open)

> Annoying braces showed on log.error using streams 
> --
>
> Key: KAFKA-5446
> URL: https://issues.apache.org/jira/browse/KAFKA-5446
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Paolo Patierno
>Priority: Trivial
>
> Hi,
> in the stream library seems to be a wrong usage of the log.error method when 
> we want to show an exception. There are useless braces at the end of the line 
> before showing exception information like the following example :
> ERROR task [0_0] Could not close task due to {} 
> (org.apache.kafka.streams.processor.internals.StreamTask:414)
> org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
> completed since the group has already rebalanced and assigned the partitions 
> to another member. This means that the time between subsequent calls to 
> poll() was longer than the configured max.poll.interval.ms, which typically 
> implies that the poll loop is spending too much time message processing. You 
> can address this either by increasing the session timeout or by reducing the 
> maximum size of batches returned in poll() with max.poll.records.
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:725)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:604)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1146)
> as you can see in " due to {}", the braces have no argument to substitute (the 
> exception fills the throwable slot), so they are printed literally.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5446) Annoying braces showed on log.error using streams

2017-06-14 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5446:
-

 Summary: Annoying braces showed on log.error using streams 
 Key: KAFKA-5446
 URL: https://issues.apache.org/jira/browse/KAFKA-5446
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Paolo Patierno
Priority: Trivial


Hi,
in the streams library there seems to be a wrong usage of the log.error method when we 
want to show an exception. There are useless braces at the end of the line, 
before the exception information, as in the following example:

ERROR task [0_0] Could not close task due to {} 
(org.apache.kafka.streams.processor.internals.StreamTask:414)
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be 
completed since the group has already rebalanced and assigned the partitions to 
another member. This means that the time between subsequent calls to poll() was 
longer than the configured max.poll.interval.ms, which typically implies that 
the poll loop is spending too much time message processing. You can address 
this either by increasing the session timeout or by reducing the maximum size 
of batches returned in poll() with max.poll.records.
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:725)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:604)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1146)

as you can see in " due to {}", the braces have no argument to substitute (the 
exception fills the throwable slot), so they are printed literally.
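A small self-contained illustration of the behaviour (plain SLF4J from Scala, not the 
actual StreamTask code): when the exception is passed as the last argument it is 
treated as the throwable, so the "{}" placeholder has nothing to bind to and shows up 
as-is in the message.

{code:scala}
import org.slf4j.LoggerFactory

object BracesLoggingSketch extends App {
  val log = LoggerFactory.getLogger(getClass)
  val e = new RuntimeException("commit failed")

  // Current pattern: "{}" has no matching argument (the exception goes into the
  // throwable slot), so the braces are printed literally in the log line.
  log.error("Could not close task due to {}", e)

  // Possible fix: drop the placeholder and keep the exception as the throwable.
  log.error("Could not close task due to the following error:", e)
}
{code}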

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5410) Fix taskClass() method name in Connector and flush() signature in SinkTask

2017-06-14 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5410:
--
Status: Patch Available  (was: Open)

> Fix taskClass() method name in Connector and flush() signature in SinkTask
> --
>
> Key: KAFKA-5410
> URL: https://issues.apache.org/jira/browse/KAFKA-5410
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Paolo Patierno
>
> Hi,
> the current documentation refers to getTaskClass() for the Connector class 
> in the file example. At the same time, a different signature is shown for 
> the flush() method in SinkTask, which now has OffsetMetadata as well.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5409) Providing a custom client-id to the ConsoleProducer tool

2017-06-14 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5409:
--
Status: Patch Available  (was: Open)

> Providing a custom client-id to the ConsoleProducer tool
> 
>
> Key: KAFKA-5409
> URL: https://issues.apache.org/jira/browse/KAFKA-5409
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> I see that the client-id property for the ConsoleProducer tool is always 
> "console-producer". It could be useful to have it as a parameter on the command 
> line, or to generate a random one as happens for the ConsoleConsumer.
> If it makes sense to you, I can work on that.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-14 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5434:
--
Status: Patch Available  (was: Open)

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Vahid Hashemian
>
> Hi,
> if I specify the --partition option for the console consumer with a non-existing 
> partition for a topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata, but when it receives the topic 
> information and doesn't find the requested partition inside such metadata, 
> the code simply retries again.
> Wouldn't it be worth checking whether the partition exists, using the partitionFor 
> method, before calling assign in the seek of the BaseConsumer, and throwing 
> an exception so that an error is printed on the console?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-13 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048054#comment-16048054
 ] 

Paolo Patierno commented on KAFKA-5434:
---

Thank you very much. I'll push a PR for that.

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Vahid Hashemian
>
> Hi,
> if I specify the --partition option for the console consumer with a non-existing 
> partition for a topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata, but when it receives the topic 
> information and doesn't find the requested partition inside such metadata, 
> the code simply retries again.
> Wouldn't it be worth checking whether the partition exists, using the partitionFor 
> method, before calling assign in the seek of the BaseConsumer, and throwing 
> an exception so that an error is printed on the console?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-13 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048035#comment-16048035
 ] 

Paolo Patierno commented on KAFKA-5434:
---

[~vahid] I can't do that now :(
I asked to be added to the contributors list but I'm not part of it yet.

Thanks
Paolo

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Vahid Hashemian
>
> Hi,
> if I specify the --partition option for the console consumer with a non-existing 
> partition for a topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata, but when it receives the topic 
> information and doesn't find the requested partition inside such metadata, 
> the code simply retries again.
> Wouldn't it be worth checking whether the partition exists, using the partitionFor 
> method, before calling assign in the seek of the BaseConsumer, and throwing 
> an exception so that an error is printed on the console?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-12 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16047462#comment-16047462
 ] 

Paolo Patierno commented on KAFKA-5434:
---

Hi [~vahid] 
is it possible to assign this JIRA to me? I was already working on it 
for a proposed PR. 
Thanks
Paolo

> Console consumer hangs if not existing partition is specified
> -
>
> Key: KAFKA-5434
> URL: https://issues.apache.org/jira/browse/KAFKA-5434
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Vahid Hashemian
>
> Hi,
> if I specify the --partition option for the console consumer with a non-existing 
> partition for a topic, the application hangs indefinitely.
> Debugging the code I see that it asks for metadata, but when it receives the topic 
> information and doesn't find the requested partition inside such metadata, 
> the code simply retries again.
> Wouldn't it be worth checking whether the partition exists, using the partitionFor 
> method, before calling assign in the seek of the BaseConsumer, and throwing 
> an exception so that an error is printed on the console?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5434) Console consumer hangs if not existing partition is specified

2017-06-12 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5434:
-

 Summary: Console consumer hangs if not existing partition is 
specified
 Key: KAFKA-5434
 URL: https://issues.apache.org/jira/browse/KAFKA-5434
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Paolo Patierno


Hi,
if I specify the --partition option for the console consumer with a non-existing 
partition for a topic, the application hangs indefinitely.
Debugging the code I see that it asks for metadata, but when it receives the topic 
information and doesn't find the requested partition inside such metadata, 
the code simply retries again.
Wouldn't it be worth checking whether the partition exists, using the partitionFor 
method, before calling assign in the seek of the BaseConsumer, and throwing 
an exception so that an error is printed on the console?
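A minimal, self-contained sketch of the fail-fast check proposed above (this is not 
the actual ConsoleConsumer/BaseConsumer code; the topic name, partition number and 
bootstrap address are illustrative):

{code:scala}
import java.util.{Collections, Properties}
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.ByteArrayDeserializer

object PartitionCheckSketch extends App {
  val topic = "test"    // illustrative
  val partitionId = 42  // a partition that may not exist

  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")
  val consumer = new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)

  // check the metadata once instead of retrying forever
  val exists = consumer.partitionsFor(topic).asScala.exists(_.partition == partitionId)
  if (!exists) {
    System.err.println(s"Partition $partitionId does not exist for topic $topic")
    consumer.close()
    sys.exit(1)
  }

  consumer.assign(Collections.singletonList(new TopicPartition(topic, partitionId)))
  consumer.seekToBeginning(Collections.singletonList(new TopicPartition(topic, partitionId)))
}
{code}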

Thanks,
Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5426) One second delay in kafka-console-producer

2017-06-12 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046495#comment-16046495
 ] 

Paolo Patierno commented on KAFKA-5426:
---

Hi,
the linger.ms parameter is set using the `--timeout` option on the command line, 
which defaults to 1000 ms if not specified.
At the same time, the batch.size parameter is set using the 
`--max-partition-memory-bytes` option on the command line, which defaults to 
16384 if not specified.
It means that even if you specify linger.ms and batch.size using 
--producer-property or --producer.config, they will always be overwritten by 
the above "specific" options.

> One second delay in kafka-console-producer
> --
>
> Key: KAFKA-5426
> URL: https://issues.apache.org/jira/browse/KAFKA-5426
> Project: Kafka
>  Issue Type: Bug
>  Components: config, producer 
>Affects Versions: 0.10.2.1
> Environment: Ubuntu 16.04 with OpenJDK-8 and Ubuntu 14.04 with 
> OpenJDK-7
>Reporter: Francisco
>Priority: Minor
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> Hello!
> I have been trying to change the default delay for the original 
> kafka-console-producer, both by adding a producer.properties file with a 
> different configuration for linger.ms and batch.size, and by providing it 
> directly on the command line with "--property", but nothing works. 
> I have also tried it in a VM with Ubuntu 14.04 and Kafka version 0.8.2.1, 
> but I have had the same result. I don't know whether it has been designed 
> like that on purpose, so that the console-producer behaviour cannot be changed, 
> or if this is a bug.
> Here you can see my original post in StackOverFlow asking for help in this 
> issue: 
> https://stackoverflow.com/questions/44334304/kafka-spark-streaming-constant-delay-of-1-second



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5412) Using connect-console-sink/source.properties raises an exception related to "file" property not found

2017-06-12 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046325#comment-16046325
 ] 

Paolo Patierno commented on KAFKA-5412:
---

Sorry [~rhauch], to which version do you mean to backport this fix?

> Using connect-console-sink/source.properties raises an exception related to 
> "file" property not found
> -
>
> Key: KAFKA-5412
> URL: https://issues.apache.org/jira/browse/KAFKA-5412
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Paolo Patierno
> Fix For: 0.11.1.0
>
>
> With the latest 0.11.1.0-SNAPSHOT it happens that the Kafka Connect example 
> using connect-console-sink/source.properties doesn't work anymore because the 
> required "file" property isn't found. This is because the underlying 
> FileStreamSink/Source connector and task define a ConfigDef with "file" 
> as a mandatory parameter. In the case of the console example we want to have 
> file=null so that stdin and stdout are used. 
> One possible solution and workaround is to set "file=" inside the provided 
> connect-console-sink/source.properties. The other would be to modify the 
> FileStreamSink/Source source code in order to remove the "file" definition 
> from the ConfigDef.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5412) Using connect-console-sink/source.properties raises an exception related to "file" property not found

2017-06-09 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5412:
--
Status: Patch Available  (was: Open)

> Using connect-console-sink/source.properties raises an exception related to 
> "file" property not found
> -
>
> Key: KAFKA-5412
> URL: https://issues.apache.org/jira/browse/KAFKA-5412
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Paolo Patierno
> Fix For: 0.11.1.0
>
>
> Hi,
> with the latest 0.11.1.0-SNAPSHOT it happens that the Kafka Connect example 
> using connect-console-sink/source.properties doesn't work anymore because the 
> required "file" property isn't found.
> This is because the underlying FileStreamSink/Source connector and task 
> define a ConfigDef with "file" as a mandatory parameter. In the case of the 
> console example we want to have file=null so that stdin and stdout are used.
> One possible solution is to set "file=" inside the provided 
> connect-console-sink/source.properties.
> The other would be to modify the FileStreamSink/Source source code in order 
> to remove the "file" definition from the ConfigDef.
> What do you think ?
> I can provide a PR for that.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5409) Providing a custom client-id to the ConsoleProducer tool

2017-06-09 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044592#comment-16044592
 ] 

Paolo Patierno commented on KAFKA-5409:
---

I think that we can live without the --client-id option and instead use the same 
approach as the ConsoleConsumer does for the group.id: the user can specify 
it with the --property option and it is not overwritten. If the client.id 
is not specified, then a random one is generated.
I'll provide a PR soon.
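A hedged sketch of that approach (the random suffix format and the object name are 
assumptions, not the actual patch):

{code:scala}
import java.util.{Properties, UUID}
import org.apache.kafka.clients.producer.ProducerConfig

object ClientIdDefaultSketch extends App {
  val props = new Properties()
  // props already contains whatever arrived via --producer-property / --producer.config

  // only set a client.id when the user did not supply one, instead of always overwriting it
  if (!props.containsKey(ProducerConfig.CLIENT_ID_CONFIG))
    props.put(ProducerConfig.CLIENT_ID_CONFIG, "console-producer-" + UUID.randomUUID())

  println(props.getProperty(ProducerConfig.CLIENT_ID_CONFIG))
}
{code}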

> Providing a custom client-id to the ConsoleProducer tool
> 
>
> Key: KAFKA-5409
> URL: https://issues.apache.org/jira/browse/KAFKA-5409
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> I see that the client-id property for the ConsoleProducer tool is always 
> "console-producer". It could be useful to have it as a parameter on the command 
> line, or to generate a random one as happens for the ConsoleConsumer.
> If it makes sense to you, I can work on that.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-5420) Console producer --key-serializer and --value-serializer are always overwritten by ByteArraySerializer

2017-06-09 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno resolved KAFKA-5420.
---
Resolution: Duplicate

Duplicate of KAFKA-2526

> Console producer --key-serializer and --value-serializer are always 
> overwritten by ByteArraySerializer
> --
>
> Key: KAFKA-5420
> URL: https://issues.apache.org/jira/browse/KAFKA-5420
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> the --key-serializer and --value-serializer options passed to the command 
> line are always overwritten here :
> {code}
> props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArraySerializer")
> props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArraySerializer")
> {code}
> in the getNewProducerProps() method.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-5419) Console consumer --key-deserializer and --value-deserializer are always overwritten by ByteArrayDeserializer

2017-06-09 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno resolved KAFKA-5419.
---
Resolution: Duplicate

Duplicate of KAFKA-2526

> Console consumer --key-deserializer and --value-deserializer are always 
> overwritten by ByteArrayDeserializer
> 
>
> Key: KAFKA-5419
> URL: https://issues.apache.org/jira/browse/KAFKA-5419
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> the --key-deserializer and  --value-deserializer options passed to the 
> command line are always overwritten here :
> {code}
> props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArrayDeserializer")
> props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArrayDeserializer")
> {code}
> in the getNewConsumerProps() method.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5419) Console consumer --key-deserializer and --value-deserializer are always overwritten by ByteArrayDeserializer

2017-06-09 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044572#comment-16044572
 ] 

Paolo Patierno commented on KAFKA-5419:
---

Yes, I think you are right, sorry. I'm going to close this as a "duplicate".

> Console consumer --key-deserializer and --value-deserializer are always 
> overwritten by ByteArrayDeserializer
> 
>
> Key: KAFKA-5419
> URL: https://issues.apache.org/jira/browse/KAFKA-5419
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> the --key-deserializer and  --value-deserializer options passed to the 
> command line are always overwritten here :
> {code}
> props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArrayDeserializer")
> props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArrayDeserializer")
> {code}
> in the getNewConsumerProps() method.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5420) Console producer --key-serializer and --value-serializer are always overwritten by ByteArraySerializer

2017-06-09 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5420:
--
Description: 
Hi,
the --key-serializer and --value-serializer options passed to the command line 
are always overwritten here :

{code}
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer")
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer")
{code}

in the getNewProducerProps() method.

Thanks,
Paolo.

  was:
Hi,
the --key-serializer and --value-serializer options passed to the command line 
are always overwritten here :

{code}
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer")
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer")
{code}

in the getNewProducerProps() method.

Thanks,
Paolo.


> Console producer --key-serializer and --value-serializer are always 
> overwritten by ByteArraySerializer
> --
>
> Key: KAFKA-5420
> URL: https://issues.apache.org/jira/browse/KAFKA-5420
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> the --key-serializer and --value-serializer options passed to the command 
> line are always overwritten here :
> {code}
> props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArraySerializer")
> props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArraySerializer")
> {code}
> in the getNewProducerProps() method.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5419) Console consumer --key-deserializer and --value-deserializer are always overwritten by ByteArrayDeserializer

2017-06-09 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5419:
--
Description: 
Hi,
the --key-deserializer and  --value-deserializer options passed to the command 
line are always overwritten here :

{code}
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArrayDeserializer")
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArrayDeserializer")
{code}

in the getNewConsumerProps() method.

Thanks,
Paolo.

  was:
Hi,
the --key-deserializer and  --value-deserializer options passed to the command 
line are always overwritten here :

{code}
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArrayDeserializer")
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArrayDeserializer")
{code}

Thanks,
Paolo.


> Console consumer --key-deserializer and --value-deserializer are always 
> overwritten by ByteArrayDeserializer
> 
>
> Key: KAFKA-5419
> URL: https://issues.apache.org/jira/browse/KAFKA-5419
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> the --key-deserializer and  --value-deserializer options passed to the 
> command line are always overwritten here :
> {code}
> props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArrayDeserializer")
> props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
> "org.apache.kafka.common.serialization.ByteArrayDeserializer")
> {code}
> in the getNewConsumerProps() method.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5420) Console producer --key-serializer and --value-serializer are always overwritten by ByteArraySerializer

2017-06-09 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5420:
-

 Summary: Console producer --key-serializer and --value-serializer 
are always overwritten by ByteArraySerializer
 Key: KAFKA-5420
 URL: https://issues.apache.org/jira/browse/KAFKA-5420
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Paolo Patierno
Priority: Minor


Hi,
the --key-serializer and --value-serializer options passed to the command line 
are always overwritten here :

{code}
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer")
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArraySerializer")
{code}

in the getNewProducerProps() method.
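One possible direction for a fix, sketched here as an assumption rather than the actual 
patch: apply the ByteArraySerializer default only when the user has not already 
provided a serializer on the command line.

{code:scala}
import java.util.Properties
import org.apache.kafka.clients.producer.ProducerConfig

object SerializerDefaultSketch extends App {
  val props = new Properties()
  // pretend the user passed --key-serializer on the command line
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringSerializer")

  // default only when absent, so the command-line options are not overwritten
  props.putIfAbsent(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.ByteArraySerializer")
  props.putIfAbsent(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.ByteArraySerializer")

  // the key serializer stays StringSerializer, the value serializer gets the default
  println(props)
}
{code}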

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5419) Console consumer --key-deserializer and --value-deserializer are always overwritten by ByteArrayDeserializer

2017-06-09 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5419:
-

 Summary: Console consumer --key-deserializer and 
--value-deserializer are always overwritten by ByteArrayDeserializer
 Key: KAFKA-5419
 URL: https://issues.apache.org/jira/browse/KAFKA-5419
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Paolo Patierno
Priority: Minor


Hi,
the --key-deserializer and  --value-deserializer options passed to the command 
line are always overwritten here :

{code}
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArrayDeserializer")
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
"org.apache.kafka.common.serialization.ByteArrayDeserializer")
{code}

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-3925) Default log.dir=/tmp/kafka-logs is unsafe

2017-06-09 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044002#comment-16044002
 ] 

Paolo Patierno commented on KAFKA-3925:
---

In my opinion, putting the logs in /tmp is great for developers who are starting to 
use Kafka, and for testing as well. They don't need to remember to delete such 
logs after "playing" around with Kafka. I think it is fine as a default. If 
people need persistence of such logs, they should change the logs dir. I think 
that just a warning on startup could be enough. 
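For anyone who does want the data to survive /tmp cleanup, the change amounts to a 
single broker setting; the path below is only an example:

{code}
# server.properties
log.dirs=/var/lib/kafka-logs
{code}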

> Default log.dir=/tmp/kafka-logs is unsafe
> -
>
> Key: KAFKA-3925
> URL: https://issues.apache.org/jira/browse/KAFKA-3925
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.10.0.0
> Environment: Various, depends on OS and configuration
>Reporter: Peter Davis
>
> Many operating systems are configured to delete files under /tmp.  For 
> example Ubuntu has 
> [tmpreaper|http://manpages.ubuntu.com/manpages/wily/man8/tmpreaper.8.html], 
> others use tmpfs, others delete /tmp on startup. 
> Defaults are OK to make getting started easier but should not be unsafe 
> (risking data loss). 
> Something under /var would be a better default log.dir under *nix.  Or 
> relative to the Kafka bin directory to avoid needing root.  
> If the default cannot be changed, I would suggest a special warning print to 
> the console on broker startup if log.dir is under /tmp. 
> See [users list 
> thread|http://mail-archives.apache.org/mod_mbox/kafka-users/201607.mbox/%3cCAD5tkZb-0MMuWJqHNUJ3i1+xuNPZ4tnQt-RPm65grxE0=0o...@mail.gmail.com%3e].
>   I've also been bitten by this. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5412) Using connect-console-sink/source.properties raises an exception related to "file" property not found

2017-06-09 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043988#comment-16043988
 ] 

Paolo Patierno commented on KAFKA-5412:
---

Thanks [~rhauch] ! I'll do the PR ...
Btw I already sent the request to be added to the contributors list a couple 
of days ago, but no response so far :-(

> Using connect-console-sink/source.properties raises an exception related to 
> "file" property not found
> -
>
> Key: KAFKA-5412
> URL: https://issues.apache.org/jira/browse/KAFKA-5412
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Paolo Patierno
>
> Hi,
> with the latest 0.11.1.0-SNAPSHOT it happens that the Kafka Connect example 
> using connect-console-sink/source.properties doesn't work anymore because the 
> required "file" property isn't found.
> This is because the underlying FileStreamSink/Source connector and task 
> define a ConfigDef with "file" as a mandatory parameter. In the case of the 
> console example we want to have file=null so that stdin and stdout are used.
> One possible solution is to set "file=" inside the provided 
> connect-console-sink/source.properties.
> The other would be to modify the FileStreamSink/Source source code in order 
> to remove the "file" definition from the ConfigDef.
> What do you think ?
> I can provide a PR for that.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (KAFKA-5409) Providing a custom client-id to the ConsoleProducer tool

2017-06-08 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043185#comment-16043185
 ] 

Paolo Patierno edited comment on KAFKA-5409 at 6/8/17 6:23 PM:
---

Hi Bharat,
you are right but ... inside the getNewProducerProps() method, the call to 
producerProps(config) gets the "extraProducerProps" and fills the properties 
(i.e. the client.id passed by the --producer-property option) in the right way 
... 

{code}
private def producerProps(config: ProducerConfig): Properties = {
val props =
  if (config.options.has(config.producerConfigOpt))
Utils.loadProps(config.options.valueOf(config.producerConfigOpt))
  else new Properties
props.putAll(config.extraProducerProps)
props
  }
{code}

then ... the client.id is overridden by :

{code}
props.put(ProducerConfig.CLIENT_ID_CONFIG, "console-producer")
{code}

so the --producer-property option loses its effect.


was (Author: ppatierno):
Hi Bharat,
you are right but ... inside the getNewProducerProps() method, the call to 
producerProps(config) gets the "extraProducerProps" and fills the properties 
(i.e. the client.id passed by the --producer-property option) in the right way 
then ... the client.id is overridden by :

{code}
props.put(ProducerConfig.CLIENT_ID_CONFIG, "console-producer")
{code}

so the --producer-property option loses its effect.

> Providing a custom client-id to the ConsoleProducer tool
> 
>
> Key: KAFKA-5409
> URL: https://issues.apache.org/jira/browse/KAFKA-5409
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> I see that the client-id property for the ConsoleProducer tool is always 
> "console-producer". It could be useful to have it as a parameter on the command 
> line, or to generate a random one as happens for the ConsoleConsumer.
> If it makes sense to you, I can work on that.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5409) Providing a custom client-id to the ConsoleProducer tool

2017-06-08 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043185#comment-16043185
 ] 

Paolo Patierno commented on KAFKA-5409:
---

Hi Bharat,
you are right but ... inside the getNewProducerProps() method, the call to 
producerProps(config) gets the "extraProducerProps" and fills the properties 
(i.e. the client.id passed by the --producer-property option) in the right way 
then ... the client.id is overridden by :

{code}
props.put(ProducerConfig.CLIENT_ID_CONFIG, "console-producer")
{code}

so the --producer-property option loses its effect.

> Providing a custom client-id to the ConsoleProducer tool
> 
>
> Key: KAFKA-5409
> URL: https://issues.apache.org/jira/browse/KAFKA-5409
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> I see that the client-id property for the ConsoleProducer tool is always 
> "console-producer". It could be useful to have it as a parameter on the command 
> line, or to generate a random one as happens for the ConsoleConsumer.
> If it makes sense to you, I can work on that.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-5412) Using connect-console-sink/source.properties raises an exception related to "file" property not found

2017-06-08 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16043034#comment-16043034
 ] 

Paolo Patierno commented on KAFKA-5412:
---

Yes, I agree this is a better solution. I'd like to work on that as a newbie 
contributor to the project. Can you add me to the contributors list so that I 
can assign myself to this and possibly other JIRAs? What do you think?

> Using connect-console-sink/source.properties raises an exception related to 
> "file" property not found
> -
>
> Key: KAFKA-5412
> URL: https://issues.apache.org/jira/browse/KAFKA-5412
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Paolo Patierno
>
> Hi,
> with the latest 0.11.1.0-SNAPSHOT it happens that the Kafka Connect example 
> using connect-console-sink/source.properties doesn't work anymore because the 
> required "file" property isn't found.
> This is because the underlying FileStreamSink/Source connector and task 
> define a ConfigDef with "file" as a mandatory parameter. In the case of the 
> console example we want to have file=null so that stdin and stdout are used.
> One possible solution is to set "file=" inside the provided 
> connect-console-sink/source.properties.
> The other would be to modify the FileStreamSink/Source source code in order 
> to remove the "file" definition from the ConfigDef.
> What do you think ?
> I can provide a PR for that.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5412) Using connect-console-sink/source.properties raises an exception related to "file" property not found

2017-06-08 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5412:
-

 Summary: Using connect-console-sink/source.properties raises an 
exception related to "file" property not found
 Key: KAFKA-5412
 URL: https://issues.apache.org/jira/browse/KAFKA-5412
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.1.0
Reporter: Paolo Patierno


Hi,
with the latest 0.11.1.0-SNAPSHOT it happens that the Kafka Connect example 
using connect-console-sink/source.properties doesn't work anymore because the 
required "file" property isn't found.
This is because the underlying FileStreamSink/Source connector and task 
define a ConfigDef with "file" as a mandatory parameter. In the case of the 
console example we want file=null so that stdin and stdout are used.
One possible solution is to set "file=" inside the provided 
connect-console-sink/source.properties.
The other would be to modify the FileStreamSink/Source source code in order to 
remove the "file" definition from the ConfigDef.
What do you think ?
I can provide a PR for that.
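The first workaround would look like this in the provided properties files (a sketch 
only; the rest of each file stays as shipped):

{code}
# connect-console-sink.properties / connect-console-source.properties
# an explicitly empty value so that stdin/stdout are used
file=
{code}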

Thanks,
Paolo.





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5410) Fix taskClass() method name in Connector and flush() signature in SinkTask

2017-06-08 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5410:
-

 Summary: Fix taskClass() method name in Connector and flush() 
signature in SinkTask
 Key: KAFKA-5410
 URL: https://issues.apache.org/jira/browse/KAFKA-5410
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Paolo Patierno


Hi,
the current documentation refers to getTaskClass() for the Connector class 
in the file example. At the same time, a different signature is shown for the 
flush() method in SinkTask, which now has OffsetMetadata as well.

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5409) Providing a custom client-id to the ConsoleProducer tool

2017-06-08 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5409:
-

 Summary: Providing a custom client-id to the ConsoleProducer tool
 Key: KAFKA-5409
 URL: https://issues.apache.org/jira/browse/KAFKA-5409
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Paolo Patierno
Priority: Minor


Hi,

I see that the client-id property for the ConsoleProducer tool is always 
"console-producer". It could be useful to have it as a parameter on the command 
line, or to generate a random one as happens for the ConsoleConsumer.
If it makes sense to you, I can work on that.

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5408) Using the ConsumerRecord instead of BaseConsumerRecord in the ConsoleConsumer

2017-06-08 Thread Paolo Patierno (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paolo Patierno updated KAFKA-5408:
--
Description: 
Hi,
because the BaseConsumerRecord is marked as deprecated and will be removed in 
future versions, it could be worth starting to remove its usage in the 
ConsoleConsumer. 
If it makes sense to you, I'd like to work on that, starting to contribute to 
the project.

Thanks,
Paolo.

  was:
Hi,
because the BaseConsumerRecord is marked as deprecated and will be removed in 
future versions, it could be worth starting to remove its usage in the 
ConsoleConsumer. 
If it makes sense for you, I'd like to work on that, starting to contribute to 
the project.

Thanks,
Paolo.


> Using the ConsumerRecord instead of BaseConsumerRecord in the ConsoleConsumer
> -
>
> Key: KAFKA-5408
> URL: https://issues.apache.org/jira/browse/KAFKA-5408
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> because the BaseConsumerRecord is marked as deprecated and will be removed in 
> future versions, it could be worth starting to remove its usage in the 
> ConsoleConsumer. 
> If it makes sense to you, I'd like to work on that, starting to contribute to 
> the project.
> Thanks,
> Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-5408) Using the ConsumerRecord instead of BaseConsumerRecord in the ConsoleConsumer

2017-06-08 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5408:
-

 Summary: Using the ConsumerRecord instead of BaseConsumerRecord in 
the ConsoleConsumer
 Key: KAFKA-5408
 URL: https://issues.apache.org/jira/browse/KAFKA-5408
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Paolo Patierno
Priority: Minor


Hi,
because the BaseConsumerRecord is marked as deprecated and will be removed in 
future versions, it could be worth starting to remove its usage in the 
ConsoleConsumer. 
If it makes sense for you, I'd like to work on that, starting to contribute to 
the project.

Thanks,
Paolo.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4400) Prefix for sink task consumer groups should be configurable

2017-06-08 Thread Paolo Patierno (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042325#comment-16042325
 ] 

Paolo Patierno commented on KAFKA-4400:
---

[~ewencp] I don't see any activity on this in the last few months. I'd like to start 
contributing to the Kafka project, so this JIRA could be a starting point for 
doing that. What do you think? Can you add me to the contributors list so that 
I can assign JIRAs to myself?

Thanks,
Paolo.

> Prefix for sink task consumer groups should be configurable
> ---
>
> Key: KAFKA-4400
> URL: https://issues.apache.org/jira/browse/KAFKA-4400
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Ewen Cheslack-Postava
>  Labels: newbie
>
> Currently the prefix for creating consumer groups is fixed. This means that 
> if you run multiple Connect clusters using the same Kafka cluster and create 
> connectors with the same name, sink tasks in different clusters will join the 
> same group. Making this prefix configurable at the worker level would protect 
> against this.
> An alternative would be to define unique cluster IDs for each connect 
> cluster, which would allow us to construct a unique name for the group 
> without requiring yet another config (but presents something of a 
> compatibility challenge).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)