One more thing:
I saw that the Python client is also unaffected by the addition of partitions
to a topic, and that it continues to send requests only to the old
partitions.
Is this handled appropriately by the Java producer? Will it see the
change and produce to the new partitions as well?
Shlomi
I guess I should mention that exoscale (https://exoscale.ch) is powered by
kafka as well.
Cheers,
- pyr
On Sun, Nov 9, 2014 at 7:36 PM, Gwen Shapira gshap...@cloudera.com wrote:
I'm not Jay, but fixed it anyways ;)
Gwen
On Sun, Nov 9, 2014 at 10:34 AM, vipul jhawar
Hi there,
my company Cityzen Data uses Kafka as well; we provide a platform for
collecting, storing and analyzing machine data.
http://www.cityzendata.com/
@CityzenData
Mathias.
On Mon, Nov 10, 2014 at 11:57 AM, Pierre-Yves Ritschard
p...@spootnik.org wrote:
I guess I should mention that
I added both Cityzen Data and Exoscale to the wiki. Please feel free to
edit and expand further. If you need edit permission, let me know.
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter:
Hmmm..
The Java producer example seems to ignore added partitions too...
How can I auto refresh keyed producers to use new partitions as these
partitions are added?
On Mon, Nov 10, 2014 at 12:33 PM, Shlomi Hazan shl...@viber.com wrote:
One more thing:
I saw that the Python client is also
Awesome! updated, thanks!
On Mon, Nov 10, 2014 at 6:56 AM, Yann Schwartz y.schwa...@criteo.com
wrote:
Hello,
I'll just chime in, then. Criteo (http://www.criteo.com) has been using
Kafka for over a year for stream processing and log transfer (over 2M
messages/s and growing).
Yann.
Hi Marco,
The fetch error comes from UnresolvedAddressException; could you check
whether you had a network partition issue during that time?
As for the "Too many open files" error, I think this is due to such
exceptions not being handled properly, so the socket is not closed in time.
Which version
Just some additions to Chia-Chun's response: each topic can have multiple
partitions, and each partition can be replicated as multiple replicas on
different machines. acks = n means that the data sent to a particular
partition has been replicated to at least n replicas.
Guozhang
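Guozhang's point can be sketched as an old-producer config (a sketch; `request.required.acks` is the 0.8 producer's name for this setting, and the broker address is an assumption):

```properties
# 0.8 producer config sketch (assumption: broker on localhost:9092)
metadata.broker.list=localhost:9092
# With acks = 2 and replication factor 3, the leader acknowledges a write
# only after at least 2 replicas (itself included) have the message.
request.required.acks=2
```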
On Sun, Nov 9,
We're using Kafka 0.8.1.1.
A network partition is a possibility.
Now I'm just wondering if deleting the data folder on the second node will at
least get it to come up again.
I think another guy tried a kafka-reassign-partitions just before it all blew
up.
On Monday, November 10, 2014 at 16:36,
Oo, add us too!
The Wikimedia Foundation (http://wikimediafoundation.org/wiki/Our_projects)
uses Kafka as a log transport for analytics data from production webservers and
applications. This data is consumed into Hadoop using Camus and to other
processors of analytics data.
On Nov 10,
Cool, updated, thanks!!!
On Mon, Nov 10, 2014 at 7:57 AM, Andrew Otto ao...@wikimedia.org wrote:
Oo, add us too!
The Wikimedia Foundation (http://wikimediafoundation.org/wiki/Our_projects)
uses Kafka as a log transport for analytics data from production webservers
and applications. This
You do not need to delete the data folder. I think the file handles here are
mostly due to socket leaks, i.e. network socket file handles, not disk
file handles. Just restarting the broker should do the trick.
Guozhang
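Before restarting, one quick way to confirm handles are piling up is to count the broker process's open file descriptors (a sketch, assuming Linux and that BROKER_PID is set to the broker's pid; here it points at the current shell just for illustration):

```shell
# Count open file descriptors of a process via /proc (Linux only).
BROKER_PID=$$   # assumption: replace with the Kafka broker's actual pid
ls /proc/$BROKER_PID/fd | wc -l
```

If the count keeps climbing toward the ulimit while traffic is steady, a leak is likely.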
On Mon, Nov 10, 2014 at 7:47 AM, Marco zentrop...@yahoo.co.uk wrote:
We're using
I had a different experience with expanding partitions with the new producer
and its impact. I only tried it for non-keyed messages. I would always advise
keeping the batch size relatively low, or planning for expansion with the new
Java producer in advance or from inception; otherwise running producer code is
impacted.
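The impact on keyed messages can be seen from the default partitioner's mapping (a sketch; Kafka's default hash partitioner takes the key's hash modulo the partition count, so adding partitions remaps existing keys, and the key used here is hypothetical):

```java
// Sketch of default hash partitioning: key hash modulo partition count.
public class PartitionShift {
    static int partitionFor(String key, int numPartitions) {
        // abs() guards against negative hash codes
        return Math.abs(key.hashCode()) % numPartitions;
    }

    public static void main(String[] args) {
        String key = "bob"; // hypothetical key
        // The same key lands on a different partition once partitions are added.
        System.out.println("4 partitions -> partition " + partitionFor(key, 4)); // 1
        System.out.println("5 partitions -> partition " + partitionFor(key, 5)); // 2
    }
}
```

This is why expanding partitions silently breaks any consumer-side assumption that all messages for a key live in one partition.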
You are right. The swap will be skipped in that case. It seems this
mechanism does not prevent scenarios where the storage system crashes hard.
An orthogonal note: I originally thought renameTo in Linux is atomic, but
after reading some JavaDocs I think maybe we should use java.nio.file.Files.move to be
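For reference, a minimal sketch of the java.nio alternative: Files.move with ATOMIC_MOVE either performs an atomic rename or throws AtomicMoveNotSupportedException, unlike File.renameTo, which just returns false on any failure:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicRename {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("swap-demo");
        Path tmp = dir.resolve("segment.tmp");
        Path fin = dir.resolve("segment.log");
        Files.write(tmp, "data".getBytes());
        // ATOMIC_MOVE asks the filesystem for an atomic rename and throws
        // if it cannot provide one, rather than silently falling back.
        Files.move(tmp, fin, StandardCopyOption.ATOMIC_MOVE);
        System.out.println(Files.exists(fin) && !Files.exists(tmp)); // true
    }
}
```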
Thanks. That worked just fine!
On Monday, November 10, 2014 at 17:53, Guozhang Wang wangg...@gmail.com wrote:
You do not need to delete the data folder, I think file handles here are
mostly due to socket leaks, i.e. network socket file handlers, not disk file
handlers. Just restart the
Thanks Jun for your response.
Here is my scenario:
topicCountMap.put(topic, new Integer(2));
Map&lt;String, List&lt;KafkaStream&lt;byte[], byte[]&gt;&gt;&gt; consumerMap =
    consumer.createMessageStreams(topicCountMap);
List&lt;KafkaStream&lt;byte[], byte[]&gt;&gt; streams = consumerMap.get(topic);
So from above scenario (only 1
Thanks, Neha. I tried the same test with 0.8.2-beta and am happy to report
I've been unable to reproduce the bad behavior. I'll follow up if this
changes.
On Sun, Nov 9, 2014 at 9:30 PM, Neha Narkhede neha.narkh...@gmail.com
wrote:
We fixed a couple issues related to automatic leader balancing
I am embarrassed to admit it, but I can't get a basic 'word count' to work
under Kafka/Spark Streaming. My code looks like this. I don't see any
word counts in the console output, and I don't see any output in the UI
either. Needless to say, I am a newbie in both Spark and Kafka.
Please help. Thanks.
I am not running locally. The Spark master is:
spark://machine name:7077
On Mon, Nov 10, 2014 at 3:47 PM, Tathagata Das tathagata.das1...@gmail.com
wrote:
What is the Spark master that you are using? Use local[4], not local,
if you are running locally.
On Mon, Nov 10, 2014 at 3:01 PM,
Hey guys,
i am using kafka_2.9.2-0.8.1.1
bin/kafka-topics.sh --zookeeper localhost:2182 --alter --topic my_topic
--config log.retention.hours.per.topic=48
It says:
Error while executing topic command requirement failed: Unknown
configuration log.retention.hours.per.topic.
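For what it's worth, in 0.8.1 the per-topic retention override is named `retention.ms` rather than `log.retention.hours.per.topic` (the latter was a broker-level name from older releases), so a corrected command might look like this sketch (172800000 ms = 48 hours; the zookeeper address and topic name are taken from the command above):

```shell
bin/kafka-topics.sh --zookeeper localhost:2182 --alter --topic my_topic \
  --config retention.ms=172800000
```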
How can I auto refresh keyed producers to use new partitions as these
partitions are added?
Try using the new producer under org.apache.kafka.clients.producer.
Thanks,
Neha
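A minimal sketch of the new-producer settings relevant to picking up added partitions (assumptions: kafka-clients 0.8.2 on the classpath when actually constructing the producer, and a broker on localhost; `metadata.max.age.ms` bounds how stale the producer's partition metadata can get):

```java
import java.util.Properties;

public class NewProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        // The new producer refreshes cluster metadata at least this often
        // (default 5 minutes), so partitions added to a topic are picked up
        // without restarting the producer.
        props.put("metadata.max.age.ms", "60000");
        System.out.println(props.getProperty("metadata.max.age.ms"));
    }
}
```

These properties would be passed to `new KafkaProducer` in real use; the old scala producer has no equivalent automatic refresh on the send path for added partitions.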
On Mon, Nov 10, 2014 at 8:52 AM, Bhavesh Mistry mistry.p.bhav...@gmail.com
wrote:
I had different experience with
Hi,
Is there a way to detect which version of Kafka one is running?
Is there an API for that, or a constant with this value, or maybe an MBean
or some other way to get to this info?
Thanks,
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr Elasticsearch Support
Otis,
We don't have an API for that right now. We can probably expose it via JMX as
part of KAFKA-1481.
Thanks,
Jun
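Once something like that lands, the lookup might look like the following sketch (the object name `kafka.server:type=app-info` and the `Version` attribute are hypothetical, pending KAFKA-1481; run against a plain JVM this finds nothing):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class KafkaVersionCheck {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical MBean name; KAFKA-1481 proposes exposing the broker
        // version over JMX.
        ObjectName pattern = new ObjectName("kafka.server:type=app-info,*");
        Set<ObjectName> names = server.queryNames(pattern, null);
        if (names.isEmpty()) {
            System.out.println("no version MBean found");
        } else {
            for (ObjectName name : names) {
                System.out.println(server.getAttribute(name, "Version"));
            }
        }
    }
}
```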
On Mon, Nov 10, 2014 at 7:17 PM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Hi,
Is there a way to detect which version of Kafka one is running?
Is there an API for
Don't you have the same problem using SimpleConsumer? How would another
process know that a SimpleConsumer has hung?
Thanks,
Jun
On Mon, Nov 10, 2014 at 9:47 AM, Srinivas Reddy Kancharla
getre...@gmail.com wrote:
Thanks Jun for your response.
Here is my scenario:
topicCountMap.put(topic, new