Ted - it depends on your domain. More conservative approaches to long-lived
data protect against data corruption, which generally means snapshots and cold
storage.
> On 15 Feb 2016, at 21:31, Ted Swerve wrote:
>
> HI Ben, Sharninder,
>
> Thanks for your responses, I appreciate it.
>
> Ben
Hi Ben, Sharninder,
Thanks for your responses, I appreciate it.
Ben - thanks for the tips on settings. A backup could certainly be a
possibility, although if it only offers similar durability guarantees, I'm
not sure what the purpose would be?
Sharninder - yes, we would only be using the logs as forw
+1 (binding).
Verified source and binary artifacts, ran ./gradlew testAll, quick start on
source artifact and Scala 2.11 binary artifact.
On Mon, Feb 15, 2016 at 7:43 PM, Ewen Cheslack-Postava
wrote:
> Yeah, I saw
>
> kafka.network.SocketServerTest > tooBigRequestIsRejected FAILED
> java.ne
Yeah, I saw
kafka.network.SocketServerTest > tooBigRequestIsRejected FAILED
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputS
This topic comes up often on this list. Kafka can be used as a datastore if
that’s what your application wants with the caveat that Kafka isn’t designed to
keep data around forever. There is a default retention time after which older
data gets deleted. The high level consumer essentially reads d
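The retention behaviour mentioned here is driven by broker and topic settings; an illustrative server.properties excerpt (values are examples only, and defaults vary by version):

```properties
# Time-based retention: segments older than this become eligible for deletion
log.retention.hours=168
# Size-based retention: -1 disables the size limit
log.retention.bytes=-1
# A per-topic override such as retention.ms=-1 keeps data indefinitely
```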
You can follow the instructions here
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
On Mon, 15 Feb 2016 at 03:32 Nikhil Bhaware
wrote:
> Hi,
> I have a 6-node Kafka cluster on which I created a topic with a
> replication factor of 3 and 1000
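The reassignment flow in those instructions boils down to: describe the topics to move in a JSON file, generate a candidate assignment, execute it, then verify. A sketch below (the topic name and broker ids are hypothetical, and the kafka-reassign-partitions.sh invocations need a running cluster, so they are shown commented out):

```shell
# Topics to move (hypothetical topic name)
cat > /tmp/topics-to-move.json <<'EOF'
{"version": 1, "topics": [{"topic": "my-topic"}]}
EOF

# 1. Generate a candidate assignment for the target brokers:
# bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
#   --topics-to-move-json-file /tmp/topics-to-move.json --broker-list "4,5,6" --generate
# 2. Save the proposed assignment to a file and execute it:
# bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
#   --reassignment-json-file /tmp/reassignment.json --execute
# 3. Check progress until all partitions report completed:
# bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
#   --reassignment-json-file /tmp/reassignment.json --verify

cat /tmp/topics-to-move.json
```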
+1 (non-binding).
Verified source and binary artifacts, ran ./gradlew testAll with JDK 7u80,
quick start on source artifact and Scala 2.11 binary artifact.
Ismael
On Fri, Feb 12, 2016 at 2:55 AM, Jun Rao wrote:
> This is the first candidate for release of Apache Kafka 0.9.0.1. This is a bug
> fix
Hi Ted
This is an interesting question.
Kafka has similar resilience properties to other distributed stores such as
Cassandra, which are used as master data stores (obviously without the query
functions). You’d need to set unclean.leader.election.enable=false and
configure sufficient replicat
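The settings being described might look like the following sketch (unclean.leader.election.enable comes from the message above; the other values are illustrative assumptions about what sufficient replication could mean):

```properties
# Broker/topic side
unclean.leader.election.enable=false
default.replication.factor=3
min.insync.replicas=2
# Producer side: wait for all in-sync replicas to acknowledge
acks=all
```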
Any ideas as to which property should I set to enable Zookeeper
re-connection? I have the following properties defined for my consumer
(High Level Consumer API). Is this enough for an automatic Zookeeper
re-connect?
val props = new Properties()
props.put("zookeeper.connect", zookeeper)
props.put("g
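For reference, the ZooKeeper-related settings of the old high-level consumer look roughly like this (illustrative values; the underlying ZooKeeper client generally attempts to re-establish the session on its own, within the session timeout):

```properties
zookeeper.connect=localhost:2181
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
zookeeper.sync.time.ms=2000
```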
I would also like to know what options I have to ping Kafka using the
0.8.2.1 client. Any suggestions, please?
On Mon, Feb 15, 2016 at 6:28 PM, Franco Giacosa wrote:
> Hi,
>
> To ping kafka for a health check, what are my options if I am using the
> java client 0.9.0?
>
> I know that the con
Hi,
To ping kafka for a health check, what are my options if I am using the
java client 0.9.0?
I know that the Confluent Platform has an API proxy, but it needs the
schema registry (which I am not running); also, I don't know if the schema
registry is a dependency if I use only the health check
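Lacking a dedicated ping API in the 0.9 Java client, one crude but dependency-free option is a TCP-level probe of the broker port; a sketch, with host and port as assumptions:

```shell
# Minimal TCP reachability probe for a Kafka broker (assumed host/port).
# Note: this only shows the port accepts connections, not that the broker is healthy.
broker_host=localhost
broker_port=9092
if timeout 2 bash -c "exec 3<>/dev/tcp/${broker_host}/${broker_port}" 2>/dev/null; then
  echo "broker port reachable"
else
  echo "broker port unreachable"
fi
```

A stronger check is to issue a real metadata request from the client (for example, a consumer's partitionsFor call) and treat a successful response as healthy.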
Hello,
Is it viable to use infinite-retention Kafka topics as a master data
store? I'm not talking massive volumes of data here, but still potentially
extending into tens of terabytes.
Are there any drawbacks or pitfalls to such an approach? It seems like a
compelling design, but there seem to
Another update.
The problem appeared again. The consumer is stalling at certain offsets.
Does anyone have an idea of what could be happening?
If there's anything I could add that might help, let me know.
2016-02-12 10:29 GMT-03:00 Maximiliano Patricio Méndez :
> Hi,
>
> An update about this.
>
> I've recreat
You could use JMX to retrieve the version number:
in Kafka 0.9 it's here: kafka.server:type=app-info,id=
in Kafka 0.8 it's here: kafka.common:name=Version,type=AppInfo
Cheers
Fabian
2016-02-15 17:36 GMT+01:00 Yousef Abu Ulbeh :
> Hi,
>
> i am using Kafka with our product and i want to know how we
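Those MBeans can also be read from the command line with the JmxTool class that ships with Kafka; a sketch (assumes the broker was started with JMX enabled on port 9999 and has broker id 0):

```shell
# MBean carrying the version (0.9 naming; broker id assumed to be 0)
mbean='kafka.server:type=app-info,id=0'
# For 0.8 the name would be kafka.common:name=Version,type=AppInfo
echo "querying ${mbean}"
# Requires a broker started with JMX enabled, e.g. JMX_PORT=9999 bin/kafka-server-start.sh ...,
# so the actual query is shown commented out:
# bin/kafka-run-class.sh kafka.tools.JmxTool \
#   --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
#   --object-name "$mbean"
```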
[root@itfadfae kafka_2.11-0.9.0.0]# bin/kafka-console-consumer.sh --consumer.config config/consumer.properties --zookeeper 10.110.16.76:2181 --topic partitiontest
[2016-02-15 14:34:01,053] WARN Property enable.auto.commit is not valid (kafka.utils.VerifiableProperties)
Hi,
I am using Kafka with our product, and I want to know how we can get the
Kafka version and status using Java?
Thanks,
I have a Kafka broker and Zookeeper running locally. I use the high level
consumer API to read messages from a topic. Now I manually disconnect /
shutdown the Zookeeper instance running on my local machine.
I can see in my consumer logs the following:
20160215-16:03:43.110+0100
[kafka-consumer
Hi there,
I want to implement the Offset Commit/Fetch API functionality in our in-house
.NET client.
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommit/FetchAPI
It seems the documentation is incomplete and clearly not of the sam
Hi Andre,
Please see KIP-41:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-41%3A+KafkaConsumer+Max+Records
The aim is to include this in the next release of Kafka.
Ismael
On Mon, Feb 15, 2016 at 12:21 PM, André wrote:
> Hi there
>
> I've just started evaluating Kafka as an additional
Hi there
I've just started evaluating Kafka as an additional message broker to be
supported by the platform I work for. Currently, we support AMQP and JMS.
One of our use-cases for messaging is to use queues for distributing tasks
to workers as they become available.
I've noticed that by using a
Hi!
I'm investigating Kafka for transferring data between multiple
geographical locations, and some of them are quite far away and on bad
network links. For example, data might originate somewhere in the less
connected parts of Asia and should be mirrored to a Kafka cluster in Europe.
How wi
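Cross-datacenter copying like this is usually done with the bundled MirrorMaker tool, which runs consumers against the source cluster and a producer against the target. A sketch (hostnames are placeholders, and the invocation needs running clusters, so it is shown commented out):

```shell
# Source-cluster consumer settings (placeholder hostnames)
cat > /tmp/mm-consumer.properties <<'EOF'
zookeeper.connect=source-zk.asia.example.com:2181
group.id=mirror-maker
EOF

# Target-cluster producer settings
cat > /tmp/mm-producer.properties <<'EOF'
bootstrap.servers=target-kafka.eu.example.com:9092
EOF

# bin/kafka-mirror-maker.sh --consumer.config /tmp/mm-consumer.properties \
#   --producer.config /tmp/mm-producer.properties --whitelist '.*'

cat /tmp/mm-consumer.properties /tmp/mm-producer.properties
```

Over a poor WAN link, tuning producer batching and socket buffer sizes on the MirrorMaker side is usually where the throughput work happens.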
Hi,
It is a bug in the consumer that has been fixed by KAFKA-2978. You should
try building the consumer from the latest 0.9.0 branch (or the 0.9.0.1 RC).
I've had the same issue and confirmed it works fine on the latest 0.9.0.
Thanks,
Damian
On 14 February 2016 at 18:50, Anurag Laddha wrote:
>
Kafka is pretty nice and, as long as you have basic monitoring in place,
doesn't take too much attention, but keep in mind that it still depends on
ZooKeeper, and I've seen that be the bottleneck in the past. I also
think, as a single engineering person in your startup, if you don't need
kafka or