[jira] [Created] (KAFKA-1831) Producer does not provide any information about which host the data was sent to

2014-12-28 Thread Mark Payne (JIRA)
Mark Payne created KAFKA-1831:
-

 Summary: Producer does not provide any information about which 
host the data was sent to
 Key: KAFKA-1831
 URL: https://issues.apache.org/jira/browse/KAFKA-1831
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 0.8.1.1
Reporter: Mark Payne
Assignee: Jun Rao


For traceability and troubleshooting purposes, when sending data to Kafka, the 
Producer should provide information about which host the data was sent to. The 
SimpleConsumer already supports this well, providing host() and port() 
methods.
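As a sketch of what such an API could look like (the class and method names below are invented for illustration and are not part of Kafka), a per-send result object could mirror SimpleConsumer's accessors:

```java
// Hypothetical result object for a producer send, mirroring the
// host()/port() accessors that SimpleConsumer already provides.
// Not an actual Kafka class; names are illustrative only.
public class SendInfoSketch {
    private final String host;
    private final int port;

    public SendInfoSketch(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public String host() { return host; }
    public int port() { return port; }

    public static void main(String[] args) {
        // A producer could return one of these per send for traceability:
        SendInfoSketch info = new SendInfoSketch("broker1.example.com", 9092);
        System.out.println(info.host() + ":" + info.port());
    }
}
```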



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1832) Async Producer will cause 'java.net.SocketException: Too many open files' when broker host does not exist

2014-12-28 Thread barney (JIRA)
barney created KAFKA-1832:
-

 Summary: Async Producer will cause 'java.net.SocketException: Too 
many open files' when broker host does not exist
 Key: KAFKA-1832
 URL: https://issues.apache.org/jira/browse/KAFKA-1832
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.1.1, 0.8.1
 Environment: linux
Reporter: barney
Assignee: Jun Rao


How to reproduce the problem:
1) producer configuration:
1.1) producer.type=async
1.2) metadata.broker.list=not.existed.com:9092
Make sure the host 'not.existed.com' does not exist in the DNS server or in 
/etc/hosts;
2) send a lot of messages continuously using the above producer
It will cause 'java.net.SocketException: Too many open files' after a while; 
you can also use 'lsof -p $pid|wc -l' to watch the count of open files grow 
over time until it reaches the system limit (check with 'ulimit -n').

Problem cause:
In kafka.network.BlockingChannel, the line
'channel.connect(new InetSocketAddress(host, port))'
will throw a 'java.nio.channels.UnresolvedAddressException' when the broker 
host does not exist, and at that point the field 'connected' is still false;
In kafka.producer.SyncProducer, 'disconnect()' will not invoke 
'blockingChannel.disconnect()' because 'blockingChannel.isConnected' is false, 
which means the FileDescriptor is created but never closed;
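The leak pattern described above can be demonstrated with plain java.nio, outside of Kafka (a minimal sketch; the '.invalid' hostname is a hypothetical stand-in for an unresolvable broker): the channel's file descriptor is allocated by open(), connect() throws UnresolvedAddressException, and the channel stays open unless closed explicitly.

```java
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

public class FdLeakDemo {
    public static void main(String[] args) throws Exception {
        // '.invalid' is reserved (RFC 2606) and never resolves, so this
        // InetSocketAddress stays unresolved.
        InetSocketAddress addr = new InetSocketAddress("not.existed.invalid", 9092);
        SocketChannel channel = SocketChannel.open(); // fd allocated here
        try {
            channel.connect(addr); // throws before the channel is marked connected
        } catch (UnresolvedAddressException e) {
            // isConnected() is false, yet the channel (and its fd) is still open:
            System.out.println("connected=" + channel.isConnected()
                    + " open=" + channel.isOpen());
        } finally {
            channel.close(); // the close that SyncProducer.disconnect() skips
        }
    }
}
```

Without the final close(), each failed connect attempt leaks one descriptor, which matches the growth visible via lsof.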

More:
When the broker address is a non-existent IP (for example: 
metadata.broker.list=1.1.1.1:9092) instead of a non-existent host, the problem 
does not appear;
In SocketChannelImpl.connect(), 'Net.checkAddress()' is not inside the 
try-catch block but 'Net.connect()' is, which makes the difference;

Temporary Solution:
In kafka.network.BlockingChannel:

try {
  channel.connect(new InetSocketAddress(host, port))
} catch {
  case e: UnresolvedAddressException =>
    disconnect()
    throw e
}





[jira] [Updated] (KAFKA-1832) Async Producer will cause 'java.net.SocketException: Too many open files' when broker host does not exist

2014-12-28 Thread barney (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

barney updated KAFKA-1832:
--
Description: 
h3.How to reproduce the problem:
* producer configuration:
** producer.type=async
** metadata.broker.list=not.existed.com:9092
Make sure the host '*not.existed.com*' does not exist in the DNS server or in 
/etc/hosts;
* send a lot of messages continuously using the above producer
It will cause '*java.net.SocketException: Too many open files*' after a while; 
you can also use '*lsof -p $pid|wc -l*' to watch the count of open files grow 
over time until it reaches the system limit (check with 
'*ulimit -n*').

h3.Problem cause:
{code:title=kafka.network.BlockingChannel|borderStyle=solid} 
channel.connect(new InetSocketAddress(host, port))
{code}
this line throws a 
'*java.nio.channels.UnresolvedAddressException*' when the broker host does not 
exist, and at that point the field '*connected*' is still false;
In *kafka.producer.SyncProducer*, '*disconnect()*' will not invoke 
'*blockingChannel.disconnect()*' because '*blockingChannel.isConnected*' is 
false, which means the FileDescriptor is created but never closed;

h3.More:
When the broker address is a non-existent IP (for example: 
metadata.broker.list=1.1.1.1:9092) instead of a non-existent host, the problem 
does not appear;
In *SocketChannelImpl.connect()*, '*Net.checkAddress()*' is not inside the 
try-catch block but '*Net.connect()*' is, which makes the difference;

h3.Temporary Solution:
{code:title=kafka.network.BlockingChannel|borderStyle=solid} 
try {
  channel.connect(new InetSocketAddress(host, port))
} catch {
  case e: UnresolvedAddressException =>
    disconnect()
    throw e
}
{code}

  was:
h3.How to replay the problem:
* producer configuration:
** producer.type=async
** metadata.broker.list=not.existed.com:9092
Make sure the host 'not.existed.com' does not exist in DNS server or /etc/hosts;
* send a lot of messages continuously using the above producer
It will cause '*java.net.SocketException: Too many open files*' after a while, 
or you can use '*lsof -p $pid|wc -l*' to check the count of open files which 
will be increasing as time goes by until it reaches the system limit(check by 
'ulimit -n').

h3.Problem cause:
{code:title=kafka.network.BlockingChannel|borderStyle=solid} 
channel.connect(new InetSocketAddress(host, port))
{code}
this line will throw an exception 
'*java.nio.channels.UnresolvedAddressException*' when broker host does not 
exist, and at this same time the field '*connected*' is false;
In *kafka.producer.SyncProducer*, 'disconnect()' will not invoke 
'blockingChannel.disconnect()' because 'blockingChannel.isConnected' is false 
which means the FileDescriptor will be created but never closed;

h3.More:
When the broker address is a non-existent IP (for example: 
metadata.broker.list=1.1.1.1:9092) instead of a non-existent host, the problem 
does not appear;
In SocketChannelImpl.connect(), 'Net.checkAddress()' is not in try-catch block 
but 'Net.connect()' is in, that makes the difference;

h3.Temporary Solution:
{code:title=kafka.network.BlockingChannel|borderStyle=solid} 
try {
  channel.connect(new InetSocketAddress(host, port))
} catch {
  case e: UnresolvedAddressException =>
    disconnect()
    throw e
}
{code}


 Async Producer will cause 'java.net.SocketException: Too many open files' 
 when broker host does not exist
 -

 Key: KAFKA-1832
 URL: https://issues.apache.org/jira/browse/KAFKA-1832
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.8.1, 0.8.1.1
 Environment: linux
Reporter: barney
Assignee: Jun Rao

 h3.How to replay the problem:
 * producer configuration:
 ** producer.type=async
 ** metadata.broker.list=not.existed.com:9092
 Make sure the host '*not.existed.com*' does not exist in DNS server or 
 /etc/hosts;
 * send a lot of messages continuously using the above producer
 It will cause '*java.net.SocketException: Too many open files*' after a 
 while, or you can use '*lsof -p $pid|wc -l*' to check the count of open 
 files, which keeps increasing until it reaches the system limit (check with 
 '*ulimit -n*').
 h3.Problem cause:
 {code:title=kafka.network.BlockingChannel|borderStyle=solid} 
 channel.connect(new InetSocketAddress(host, port))
 {code}
 this line will throw an exception 
 '*java.nio.channels.UnresolvedAddressException*' when broker host does not 
 exist, and at this same time the field '*connected*' is false;
 In *kafka.producer.SyncProducer*, '*disconnect()*' will not invoke 
 '*blockingChannel.disconnect()*' because '*blockingChannel.isConnected*' is 
 false which means the FileDescriptor will be 

kafka consumer

2014-12-28 Thread panqing...@163.com

Hi, 

I am currently learning Kafka and have a few questions: 

1. ActiveMQ brokers push messages to consumers that register a MessageListener, 
whereas Kafka consumers pull messages from the broker. Is periodically polling 
the broker the only option, or can a listener be set up on the broker side? 
2. I now have several consumers consuming the same topic; should I put them in 
the same group? 

3. What should zookeeper.session.timeout.ms be set to? The Java example uses 
400 ms, but that produces 'Unable to connect to zookeeper server within 
timeout: 400'
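For questions 2 and 3: consumers that share a group.id divide a topic's partitions among themselves, and the ZooKeeper session timeout must be long enough to actually establish a session. A hedged 0.8-era consumer configuration sketch (values are illustrative; 6000 ms is the documented default for zookeeper.session.timeout.ms):

```properties
# 0.8 high-level consumer settings (illustrative values)
zookeeper.connect=localhost:2181
# consumers sharing the same group.id split the topic's partitions among themselves
group.id=my-consumer-group
# default is 6000 ms; 400 ms is usually too short to even establish a ZooKeeper session
zookeeper.session.timeout.ms=6000
```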


panqing...@163.com


[jira] [Created] (KAFKA-1833) OfflinePartitionLeaderSelector may return null leader when ISR and Assgined Broker have no common

2014-12-28 Thread xiajun (JIRA)
xiajun created KAFKA-1833:
-

 Summary: OfflinePartitionLeaderSelector may return null leader 
when ISR and Assgined Broker have no common
 Key: KAFKA-1833
 URL: https://issues.apache.org/jira/browse/KAFKA-1833
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 0.8.2
Reporter: xiajun
Assignee: Neha Narkhede


In OfflinePartitionLeaderSelector::selectLeader, when liveBrokersInIsr is not 
empty but has no broker in common with liveAssignedReplicas, selectLeader will 
return no leader;
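The failure mode can be illustrated by the intersection logic in isolation. This is a simplified sketch of the reported behavior, not Kafka's actual code; method and parameter names are hypothetical:

```java
import java.util.List;

public class LeaderSelectorSketch {

    // Returns the chosen leader broker id, or null for the reported "no leader" case.
    static Integer selectLeader(List<Integer> liveBrokersInIsr,
                                List<Integer> liveAssignedReplicas) {
        if (!liveBrokersInIsr.isEmpty()) {
            // leader must be both in the ISR and in the assigned replica list
            for (int broker : liveBrokersInIsr) {
                if (liveAssignedReplicas.contains(broker)) {
                    return broker;
                }
            }
            return null; // ISR and assigned replicas are disjoint: no leader is chosen
        }
        // ISR empty: fall back to any live assigned replica
        return liveAssignedReplicas.isEmpty() ? null : liveAssignedReplicas.get(0);
    }

    public static void main(String[] args) {
        // ISR {1, 2} shares no broker with assigned replicas {3, 4} -> null
        System.out.println(selectLeader(List.of(1, 2), List.of(3, 4)));
    }
}
```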



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-1833: OfflinePartitionLeaderSelector may...

2014-12-28 Thread tedxia
GitHub user tedxia opened a pull request:

https://github.com/apache/kafka/pull/39

KAFKA-1833: OfflinePartitionLeaderSelector may return null leader when ISR 
and Assgi...

In OfflinePartitionLeaderSelector::selectLeader, when liveBrokersInIsr is not 
empty but has no broker in common with liveAssignedReplicas, selectLeader will 
return no leader;

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedxia/kafka fix-select-leader

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/39.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #39


commit 6f20bdc9ebe7b4a56de492c98e7a89ac7e3985ba
Author: xiajun xia...@xiaomi.com
Date:   2014-12-29T07:22:54Z

OfflinePartitionLeaderSelector may return null leader when ISR and Assgined 
Broker have no common broker




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1833) OfflinePartitionLeaderSelector may return null leader when ISR and Assgined Broker have no common

2014-12-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259911#comment-14259911
 ] 

ASF GitHub Bot commented on KAFKA-1833:
---




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-1834) No Response when handle LeaderAndIsrRequest some case

2014-12-28 Thread xiajun (JIRA)
xiajun created KAFKA-1834:
-

 Summary: No Response when handle LeaderAndIsrRequest some case
 Key: KAFKA-1834
 URL: https://issues.apache.org/jira/browse/KAFKA-1834
 Project: Kafka
  Issue Type: Bug
Reporter: xiajun


When a replica becomes leader or follower, if this broker does not exist in the 
assigned replicas, there is no response.
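The reported behavior can be sketched as a handler that records a response entry only for partitions whose assigned replicas include this broker, so other partitions silently fall through. This is a simplified illustration, not Kafka's actual code; names are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LeaderAndIsrSketch {

    // For each partition, record an error code in the response map only when
    // this broker appears in the partition's assigned replica list.
    static Map<String, String> handleLeaderAndIsr(int brokerId,
                                                  Map<String, List<Integer>> assignedReplicas) {
        Map<String, String> responses = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : assignedReplicas.entrySet()) {
            if (e.getValue().contains(brokerId)) {
                responses.put(e.getKey(), "NONE"); // normal path: a response is recorded
            }
            // reported bug: partitions whose assigned replicas do not contain
            // this broker fall through without any response entry
        }
        return responses;
    }

    public static void main(String[] args) {
        // broker 3 is not in the assigned replicas -> empty response map
        System.out.println(handleLeaderAndIsr(3, Map.of("t-0", List.of(1, 2))));
    }
}
```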



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1834) No Response when handle LeaderAndIsrRequest some case

2014-12-28 Thread xiajun (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiajun updated KAFKA-1834:
--
Affects Version/s: 0.8.2

 No Response when handle LeaderAndIsrRequest some case
 -

 Key: KAFKA-1834
 URL: https://issues.apache.org/jira/browse/KAFKA-1834
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2
Reporter: xiajun
  Labels: easyfix

 When a replica becomes leader or follower, if this broker does not exist in the 
 assigned replicas, there is no response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-1834: No Response when handle LeaderAndI...

2014-12-28 Thread tedxia
GitHub user tedxia opened a pull request:

https://github.com/apache/kafka/pull/40

KAFKA-1834: No Response when handle LeaderAndIsrRequest some case

PR for [KAFKA-1834](https://issues.apache.org/jira/browse/KAFKA-1834)

When a replica becomes leader or follower, if this broker does not exist in 
assigned replicas, there is no response.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedxia/kafka 
fix-noresponse-on-become-leader-or-follower

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/40.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #40


commit 80858f72c39eaa974a6085f78e797de4e8c55aae
Author: xiajun xia...@xiaomi.com
Date:   2014-12-29T07:53:44Z

No Response when handle LeaderAndIsrRequest some case




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-1834) No Response when handle LeaderAndIsrRequest some case

2014-12-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259919#comment-14259919
 ] 

ASF GitHub Bot commented on KAFKA-1834:
---




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)