consumer property - zookeeper.sync.time.ms

2015-07-23 Thread Kris K
Hi,

Can someone please shed some light on the consumer property
zookeeper.sync.time.ms? What are the implications of decreasing it below
2 seconds?

I read the description in the documentation but could not understand it
completely.
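(For context, here is the property as it sits in a high-level consumer config; the values shown are the documented 0.8.x defaults, and the ZooKeeper host is a placeholder:)

```properties
# consumer.properties (0.8.x high-level consumer) -- documented defaults
zookeeper.connect=zk-host:2181
# How far a ZooKeeper follower may lag behind its leader before the
# client treats it as out of sync; documented default is 2000 ms.
zookeeper.sync.time.ms=2000
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
```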

Thanks,
Kris


Re: wow--kafka--why? unresolved dependency: com.eed3si9n#sbt-assembly;0.8.8: not found

2015-07-23 Thread David Montgomery
Thanks

Now I get the error below. I am on Ubuntu 14.04.


sudo add-apt-repository ppa:cwchien/gradle
sudo apt-get update
sudo apt-get install --upgrade gradle
git clone https://github.com/apache/kafka.git
cd kafka
git checkout -b 0.8.2.1
gradle --debug




04:32:54.478 [DEBUG]
[org.apache.http.impl.conn.PoolingClientConnectionManager] Connection
released: [id: 0][route: {s}-https://repo1.maven.org][total kept alive: 0;
route allocated: 0 of 5; total allocated: 0 of 10]
04:32:54.483 [DEBUG]
[org.gradle.api.internal.artifacts.ivyservice.ivyresolve.CachingModuleComponentRepository]
Downloaded artifact 'org.eclipse.jgit.jar
(org.eclipse.jgit:org.eclipse.jgit:3.3.0.201403021825-r)' from resolver:
MavenRepo
04:32:54.492 [DEBUG]
[org.gradle.configuration.project.BuildScriptProcessor] Timing: Running the
build script took 3 mins 9.481 secs
04:32:54.517 [ERROR] [org.gradle.BuildExceptionReporter]
04:32:54.523 [ERROR] [org.gradle.BuildExceptionReporter] FAILURE: Build
failed with an exception.
04:32:54.525 [ERROR] [org.gradle.BuildExceptionReporter]
04:32:54.525 [ERROR] [org.gradle.BuildExceptionReporter] * What went wrong:
04:32:54.526 [ERROR] [org.gradle.BuildExceptionReporter] A problem occurred
configuring root project 'kafka'.
04:32:54.532 [ERROR] [org.gradle.BuildExceptionReporter]  Could not
resolve all dependencies for configuration ':classpath'.
04:32:54.533 [ERROR] [org.gradle.BuildExceptionReporter] Could not
download org.eclipse.jgit.jar
(org.eclipse.jgit:org.eclipse.jgit:3.3.0.201403021825-r)
04:32:54.534 [ERROR] [org.gradle.BuildExceptionReporter]Could not
get resource '
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.jar
'.
04:32:54.539 [ERROR] [org.gradle.BuildExceptionReporter]   SSL
peer shut down incorrectly
04:32:54.540 [ERROR] [org.gradle.BuildExceptionReporter]
04:32:54.541 [ERROR] [org.gradle.BuildExceptionReporter] * Try:
04:32:54.554 [ERROR] [org.gradle.BuildExceptionReporter] Run with
--stacktrace option to get the stack trace.
04:32:54.556 [LIFECYCLE] [org.gradle.BuildResultLogger]
04:32:54.556 [LIFECYCLE] [org.gradle.BuildResultLogger] BUILD FAILED
04:32:54.557 [LIFECYCLE] [org.gradle.BuildResultLogger]
04:32:54.558 [LIFECYCLE] [org.gradle.BuildResultLogger] Total time: 3 mins
17.746 secs
04:32:54.559 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager]
Releasing lock on buildscript class cache for settings file
'/var/kafka/settings.gradle'
(/root/.gradle/caches/2.5/scripts/settings_e2kr53ghhe98efma3q4becl82/SettingsScript/buildscript).
04:32:54.565 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager]
Releasing lock on no_buildscript class cache for settings file
'/var/kafka/settings.gradle'
(/root/.gradle/caches/2.5/scripts/settings_e2kr53ghhe98efma3q4becl82/SettingsScript/no_buildscript).
04:32:54.566 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager]
Releasing lock on buildscript class cache for script
'/var/kafka/scala.gradle'
(/root/.gradle/caches/2.5/scripts/scala_4nk8rzqzkzli700kqaewv9bt8/DefaultScript/buildscript).
04:32:54.566 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager]
Releasing lock on no_buildscript class cache for script
'/var/kafka/scala.gradle'
(/root/.gradle/caches/2.5/scripts/scala_4nk8rzqzkzli700kqaewv9bt8/DefaultScript/no_buildscript).
04:32:54.569 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager]
Releasing lock on buildscript class cache for build file
'/var/kafka/build.gradle'
(/root/.gradle/caches/2.5/scripts/build_6tzry9d19exc5mwsn5i8quh6j/ProjectScript/buildscript).
04:32:54.570 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager]
Releasing lock on buildscript class cache for script
'/var/kafka/gradle/buildscript.gradle'
(/root/.gradle/caches/2.5/scripts/buildscript_1vxtwern8bk8c0aam5ho9cjpa/DefaultScript/buildscript).
04:32:54.570 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager]
Releasing lock on no_buildscript class cache for script
'/var/kafka/gradle/buildscript.gradle'
(/root/.gradle/caches/2.5/scripts/buildscript_1vxtwern8bk8c0aam5ho9cjpa/DefaultScript/no_buildscript).
04:32:54.594 [DEBUG] [org.gradle.cache.internal.DefaultCacheAccess] Cache
Plugin Resolution Cache (/root/.gradle/caches/2.5/plugin-resolution) was
closed 0 times.
04:32:54.599 [DEBUG]
[org.gradle.cache.internal.btree.BTreePersistentIndexedCache] Closing cache
artifact-at-url.bin
(/root/.gradle/caches/modules-2/metadata-2.15/artifact-at-url.bin)
04:32:54.627 [DEBUG]
[org.gradle.cache.internal.btree.BTreePersistentIndexedCache] Closing cache
artifact-at-repository.bin
(/root/.gradle/caches/modules-2/metadata-2.15/artifact-at-repository.bin)
04:32:54.629 [DEBUG]
[org.gradle.cache.internal.btree.BTreePersistentIndexedCache] Closing cache
module-metadata.bin
(/root/.gradle/caches/modules-2/metadata-2.15/module-metadata.bin)
04:32:54.633 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager]
Releasing lock on artifact cache 

Re: ZK chroot path would be automatically created since Kafka 0.8.2.0?

2015-07-23 Thread yewton

Hi Gwen,

I've created the ticket: https://issues.apache.org/jira/browse/KAFKA-2357
Thanks for your reply.


On 2015/07/22 16:37, Gwen Shapira wrote:

You are right, this sounds like a doc bug. Do you mind filing a JIRA
ticket (http://issues.apache.org/jira/browse/KAFKA) so we can keep
track of this issue?



On Tue, Jul 21, 2015 at 7:43 PM, yewton yew...@gmail.com wrote:

Hi,

The document about zookeeper.connect on Broker Configs says that
"Note that you must create this path yourself prior to starting the broker",
but it seems the broker creates the path automatically on start up
(maybe related issue: https://issues.apache.org/jira/browse/KAFKA-404 ).

So is the document just not up to date?

Thanks,
Yuto Sasaki


Disaster Recovery

2015-07-23 Thread Luke Kysow
Hello All, would very much appreciate your thoughts and experiences on
backup, restore, and disaster recovery.

In the confluent.io docs (
http://docs.confluent.io/1.0/kafka/post-deployment.html under Backup &
Restoration) it says the best way to back up your cluster is to set up a
mirror.

1. Given a mirror (call it B), if all brokers in the main cluster (A) go
down, how would I bring it back up with the mirror? Would I

   - bring up a new A cluster and set it to mirror from B but not allow it
   to be produced to and not connect it to the A zookeeper (I assume
   connecting it to the A zookeeper while it is mirroring from B would cause
   issues)
   - when it's in-sync, stop the mirroring. Then allow it to be produced to
   again and for B to mirror from A again.
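In 0.8.x terms, step 1 is essentially running MirrorMaker in the reverse direction, B -> new A. A sketch of what that could look like (all host names and file names below are hypothetical):

```properties
# b-consumer.properties -- read everything from the surviving mirror B
zookeeper.connect=zk-b1:2181,zk-b2:2181
group.id=failback-mirrormaker
auto.offset.reset=smallest

# a-producer.properties -- write into the rebuilt cluster A
metadata.broker.list=broker-a1:9092,broker-a2:9092

# Invocation (run until A has caught up, then shut it down and flip
# production plus the normal A -> B mirror back on):
#   bin/kafka-run-class.sh kafka.tools.MirrorMaker \
#     --consumer.config b-consumer.properties \
#     --producer.config a-producer.properties \
#     --whitelist '.*'
```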


2. Is it possible to rebuild a cluster using EBS Snapshots (we're in AWS)?

   - Assume all kafka brokers go down
   - Spin up a new broker and attach its latest snapshot
   - Spin up new follower brokers and wait for them to replicate

What happens to ZooKeeper during this period? Also, I assume we can't bring
up all three brokers from snapshots, because their data would be out of sync
and I'm not sure whether Kafka can handle that situation.

3. What else are you guys doing for disaster recovery?

Thanks in advance for any help.


-- 
*Luke Kysow*
Software Engineer | Hootsuite https://www.hootsuite.com


Re: wow--kafka--why? unresolved dependency: com.eed3si9n#sbt-assembly;0.8.8: not found

2015-07-23 Thread Ewen Cheslack-Postava
I think you're just having connectivity issues with Maven Central. I just
ran your exact set of commands in a VM and the final gradle command
(without --debug) ran fine and you can see it downloaded the file you had
trouble with (
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.jar).
A subsequent ./gradlew jar worked fine as well.

Could be a transient network issue, or a proxy issue if you're using
something like Archiva/Nexus/Artifactory.

Output for reference:

vagrant@vagrant-ubuntu-trusty-64:~/kafka$ gradle
To honour the JVM settings for this build a new JVM will be forked. Please
consider using the daemon:
http://gradle.org/docs/2.5/userguide/gradle_daemon.html.
Download
https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.pom
Download
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.pom
Download
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit-parent/3.3.0.201403021825-r/org.eclipse.jgit-parent-3.3.0.201403021825-r.pom
Download
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy/0.0.7/jsch.agentproxy-0.0.7.pom
Download
https://repo1.maven.org/maven2/org/sonatype/oss/oss-parent/6/oss-parent-6.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.7/jsch.agentproxy.pageant-0.0.7.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.7/jsch.agentproxy.sshagent-0.0.7.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.7/jsch.agentproxy.usocket-jna-0.0.7.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.7/jsch.agentproxy.usocket-nc-0.0.7.pom
Download
https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.6/slf4j-api-1.7.6.pom
Download
https://repo1.maven.org/maven2/org/slf4j/slf4j-parent/1.7.6/slf4j-parent-1.7.6.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.46/jsch-0.1.46.pom
Download
https://repo1.maven.org/maven2/com/googlecode/javaewah/JavaEWAH/0.7.9/JavaEWAH-0.7.9.pom
Download
https://repo1.maven.org/maven2/org/sonatype/oss/oss-parent/5/oss-parent-5.pom
Download
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpclient/4.1.3/httpclient-4.1.3.pom
Download
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcomponents-client/4.1.3/httpcomponents-client-4.1.3.pom
Download
https://repo1.maven.org/maven2/org/apache/httpcomponents/project/5/project-5.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.core/0.0.7/jsch.agentproxy.core-0.0.7.pom
Download
https://repo1.maven.org/maven2/net/java/dev/jna/jna/3.4.0/jna-3.4.0.pom
Download
https://repo1.maven.org/maven2/net/java/dev/jna/platform/3.4.0/platform-3.4.0.pom
Download
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.1.4/httpcore-4.1.4.pom
Download
https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcomponents-core/4.1.4/httpcomponents-core-4.1.4.pom
Download
https://repo1.maven.org/maven2/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.pom
Download
https://repo1.maven.org/maven2/org/apache/commons/commons-parent/5/commons-parent-5.pom
Download https://repo1.maven.org/maven2/org/apache/apache/4/apache-4.pom
Download
https://repo1.maven.org/maven2/commons-codec/commons-codec/1.4/commons-codec-1.4.pom
Download
https://repo1.maven.org/maven2/org/apache/commons/commons-parent/11/commons-parent-11.pom
Download
https://repo1.maven.org/maven2/com/jcraft/jsch/0.1.49/jsch-0.1.49.pom
Download
https://repo1.maven.org/maven2/org/ajoberstar/grgit/0.2.3/grgit-0.2.3.jar
Download
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit/3.3.0.201403021825-r/org.eclipse.jgit-3.3.0.201403021825-r.jar
Download
https://repo1.maven.org/maven2/org/eclipse/jgit/org.eclipse.jgit.ui/3.3.0.201403021825-r/org.eclipse.jgit.ui-3.3.0.201403021825-r.jar
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.jsch/0.0.7/jsch.agentproxy.jsch-0.0.7.jar
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.pageant/0.0.7/jsch.agentproxy.pageant-0.0.7.jar
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.sshagent/0.0.7/jsch.agentproxy.sshagent-0.0.7.jar
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-jna/0.0.7/jsch.agentproxy.usocket-jna-0.0.7.jar
Download
https://repo1.maven.org/maven2/com/jcraft/jsch.agentproxy.usocket-nc/0.0.7/jsch.agentproxy.usocket-nc-0.0.7.jar
Download
https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.6/slf4j-api-1.7.6.jar
Download
https://repo1.maven.org/maven2/com/googlecode/javaewah/JavaEWAH/0.7.9/JavaEWAH-0.7.9.jar
Download

Re: wow--kafka--why? unresolved dependency: com.eed3si9n#sbt-assembly;0.8.8: not found

2015-07-23 Thread Ewen Cheslack-Postava
Also, the branch you're checking out is very old. If you want the most
recent release, that's tagged as 0.8.2.1. Otherwise, you'll want to use the
trunk branch.
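To see the difference, note that `git checkout -b 0.8.2.1` only creates a new branch named "0.8.2.1" from whatever HEAD you are on; it does not check out the release tag. A sketch using a throwaway local repo (every name here is hypothetical):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.name=x -c user.email=x@y commit -q --allow-empty -m "release commit"
git tag 0.8.2.1                     # stand-in for the release tag
git -c user.name=x -c user.email=x@y commit -q --allow-empty -m "trunk commit"
git checkout -q -b not-the-release  # just a copy of trunk HEAD, NOT the tag
git checkout -q -b release 0.8.2.1  # explicitly pinned to the tagged commit
git log -1 --format=%s release      # prints: release commit
```

So for the release you'd want `git checkout 0.8.2.1` (or `git checkout -b my-build 0.8.2.1`), rather than `git checkout -b 0.8.2.1`, which silently branches off whatever was already checked out.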

-Ewen

On Thu, Jul 23, 2015 at 11:45 AM, Gwen Shapira gshap...@cloudera.com
wrote:

 Sorry, we don't actually do SBT builds anymore.

 You can build successfully using Gradle:

 You need to have [gradle](http://www.gradle.org/installation) installed.

 ### First bootstrap and download the wrapper ###
 cd kafka_source_dir
 gradle

 Now everything else will work

 ### Building a jar and running it ###
 ./gradlew jar


 Can you let us know where you saw the SBT instructions, so we can fix it?

 On Thu, Jul 23, 2015 at 11:39 AM, David Montgomery
 davidmontgom...@gmail.com wrote:
  Just wondering why I am getting this very disappointing error with the Kafka
 install.
 
  git clone https://git-wip-us.apache.org/repos/asf/kafka.git
  cd kafka
  git checkout -b 0.8 remotes/origin/0.8
  ./sbt ++2.9.2 update
 
 
  Thanks
 
 
  [quoted sbt output and stack trace snipped]

wow--kafka--why? unresolved dependency: com.eed3si9n#sbt-assembly;0.8.8: not found

2015-07-23 Thread David Montgomery
Just wondering why I am getting this very disappointing error with the Kafka install.

git clone https://git-wip-us.apache.org/repos/asf/kafka.git
cd kafka
git checkout -b 0.8 remotes/origin/0.8
./sbt ++2.9.2 update


Thanks


[warn]
http://repo1.maven.org/maven2/com/jsuereth/xsbt-gpg-plugin_2.9.2_0.12/0.6/xsbt-gpg-plugin-0.6.pom
[info] Resolving org.scala-sbt#precompiled-2_10_0-m7;0.12.1 ...
[warn] ::
[warn] ::  UNRESOLVED DEPENDENCIES ::
[warn] ::
[warn] :: com.eed3si9n#sbt-assembly;0.8.8: not found
[warn] :: com.jsuereth#xsbt-gpg-plugin;0.6: not found
[warn] ::
[warn]
[warn] Note: Some unresolved dependencies have extra attributes.  Check
that these dependencies exist with the requested attributes.
[warn] com.eed3si9n:sbt-assembly:0.8.8 (sbtVersion=0.12,
scalaVersion=2.9.2)
[warn] com.jsuereth:xsbt-gpg-plugin:0.6 (sbtVersion=0.12,
scalaVersion=2.9.2)
[warn]
sbt.ResolveException: unresolved dependency:
com.eed3si9n#sbt-assembly;0.8.8: not found
unresolved dependency: com.jsuereth#xsbt-gpg-plugin;0.6: not found
at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:214)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:122)
at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:121)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:114)
at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:114)
at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:102)
at sbt.IvySbt.liftedTree1$1(Ivy.scala:49)
at sbt.IvySbt.action$1(Ivy.scala:49)
at sbt.IvySbt$$anon$3.call(Ivy.scala:58)
at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:75)
at xsbt.boot.Locks$GlobalLock.withChannelRetries$1(Locks.scala:58)
at
xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:79)
at xsbt.boot.Using$.withResource(Using.scala:11)
at xsbt.boot.Using$.apply(Using.scala:10)
at xsbt.boot.Locks$GlobalLock.liftedTree1$1(Locks.scala:51)
at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:51)
at xsbt.boot.Locks$.apply0(Locks.scala:30)
at xsbt.boot.Locks$.apply(Locks.scala:27)
at sbt.IvySbt.withDefaultLogger(Ivy.scala:58)
at sbt.IvySbt.withIvy(Ivy.scala:99)
at sbt.IvySbt.withIvy(Ivy.scala:95)
at sbt.IvySbt$Module.withModule(Ivy.scala:114)
at sbt.IvyActions$.update(IvyActions.scala:121)
at sbt.Classpaths$$anonfun$work$1$1.apply(Defaults.scala:951)
at sbt.Classpaths$$anonfun$work$1$1.apply(Defaults.scala:949)
at
sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$54.apply(Defaults.scala:972)
at
sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$54.apply(Defaults.scala:970)
at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:35)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:974)
at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:969)
at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:45)
at sbt.Classpaths$.cachedUpdate(Defaults.scala:977)
at sbt.Classpaths$$anonfun$45.apply(Defaults.scala:856)
at sbt.Classpaths$$anonfun$45.apply(Defaults.scala:853)
at sbt.Scoped$$anonfun$hf10$1.apply(Structure.scala:586)
at sbt.Scoped$$anonfun$hf10$1.apply(Structure.scala:586)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:49)
at
sbt.Scoped$Reduced$$anonfun$combine$1$$anonfun$apply$12.apply(Structure.scala:311)
at
sbt.Scoped$Reduced$$anonfun$combine$1$$anonfun$apply$12.apply(Structure.scala:311)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:41)
at sbt.std.Transform$$anon$5.work(System.scala:71)
at
sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:232)
at
sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:232)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.Execute.work(Execute.scala:238)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:232)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:232)
at
sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[error] (*:update) sbt.ResolveException: unresolved dependency:
com.eed3si9n#sbt-assembly;0.8.8: not found
[error] unresolved dependency: com.jsuereth#xsbt-gpg-plugin;0.6: not found
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?
[3]+  Stopped  

Re: wow--kafka--why? unresolved dependency: com.eed3si9n#sbt-assembly;0.8.8: not found

2015-07-23 Thread Gwen Shapira
Sorry, we don't actually do SBT builds anymore.

You can build successfully using Gradle:

You need to have [gradle](http://www.gradle.org/installation) installed.

### First bootstrap and download the wrapper ###
cd kafka_source_dir
gradle

Now everything else will work

### Building a jar and running it ###
./gradlew jar


Can you let us know where you saw the SBT instructions, so we can fix it?

On Thu, Jul 23, 2015 at 11:39 AM, David Montgomery
davidmontgom...@gmail.com wrote:
 Just wondering why I am getting this very disappointing error with the Kafka install.

 git clone https://git-wip-us.apache.org/repos/asf/kafka.git
 cd kafka
 git checkout -b 0.8 remotes/origin/0.8
 ./sbt ++2.9.2 update


 Thanks


 [quoted sbt output and stack trace snipped]

Re: New consumer - poll/seek javadoc confusing, need clarification

2015-07-23 Thread Jason Gustafson
Hey Stevo,

Thanks for the early testing on the new consumer! This might be a bug. I
wonder if it could also be explained by partition rebalancing. In the
current implementation, a rebalance will clear the old positions (including
those that were seeked to). I think it's debatable whether this behavior is
useful, but it may explain what you're seeing.

-Jason

On Thu, Jul 23, 2015 at 2:10 AM, Stevo Slavić ssla...@gmail.com wrote:

 Strange: if after seek I make several poll requests, eventually it will
 read/return messages from the offset that seek set.

 On Thu, Jul 23, 2015 at 11:03 AM, Stevo Slavić ssla...@gmail.com wrote:

  Thanks Ewen for heads up.
 
  It's great that seek is not needed in between poll when business goes as
  usual.
 
  In edge case, when my logic detects it needs to go back and reread events
  from given position in history, I use seek. I found out that next poll
  after seek will not respect the offset used in seek. It is strange that even
  Consumer.position returns the same offset that seek has set for the consumer
  instance, but poll still does not return messages starting from that
 offset.
 
  E.g. say there are 5 messages published to a single partition of a single
  topic. Consumer subscribes to that topic partition, with
 smallest/earliest
  offset reset strategy configured, and consumer.position confirms that the
  consumer is at position 0.
  Then poll is issued and it returns all 5 messages. Logic processes
  messages, detects it needs to go back in history to position 0, it does
 not
  commit messages but calls seek to 0, position confirms consumer is at
  offset 0. Next poll does not return any messages.
 
  So seek is not really doing what it should, according to its
 javadoc:
 
  /**
   * Overrides the fetch offsets that the consumer will use on the next
  {@link #poll(long) poll(timeout)}. If this API
   * is invoked for the same partition more than once, the latest offset
  will be used on the next poll(). Note that
   * you may lose data if this API is arbitrarily used in the middle of
  consumption, to reset the fetch offsets
   */
 
  I've checked also, calling seek multiple times does not help to get poll
  to use offset set with last seek.
  Could be something is wrong with poll implementation, making it not
  respect offset set with seek.
 
  Kind regards,
  Stevo Slavic.
 
 
  On Wed, Jul 22, 2015 at 7:47 AM, Ewen Cheslack-Postava 
 e...@confluent.io
  wrote:
 
  It should just continue consuming using the existing offsets. It'll have
  to
  refresh metadata to pick up the leadership change, but once it does it
 can
  just pick up where consumption from the previous leader stopped -- all
 the
  ISRs should have the same data, so the new leader will have all the same
  data the previous leader had (assuming unclean leader election is not
  enabled).
 
  On Tue, Jul 21, 2015 at 9:11 PM, James Cheng jch...@tivo.com wrote:
 
  
On Jul 21, 2015, at 9:15 AM, Ewen Cheslack-Postava 
 e...@confluent.io
  
   wrote:
 
   
On Tue, Jul 21, 2015 at 2:38 AM, Stevo Slavić ssla...@gmail.com
  wrote:
   
Hello Apache Kafka community,
   
I find new consumer poll/seek javadoc a bit confusing. Just by
  reading
   docs
I'm not sure what the outcome will be, what is expected in
 following
scenario:
   
- kafkaConsumer is instantiated with auto-commit off
- kafkaConsumer.subscribe(someTopic)
- kafkaConsumer.position is called for every TopicPartition HLC is
   actively
subscribed on
   
and then when doing multiple poll calls in succession (without
  calling
commit), does seek have to be called in between poll calls to
  position
   HLC
to skip what was read in previous poll, or does HLC keep that state
(position after poll) in memory, so that next poll (without seek in
   between
two poll calls) will continue from where last poll stopped?
   
   
The position is tracked in-memory within the consumer, so as long as
   there
isn't a consumer rebalance, consumption will just proceed with
  subsequent
messages (i.e. the behavior I think most people would find
 intuitive).
However, if a rebalance occurs (another consumer instance joins the
  group
or some leave), then a partition may be assigned to an different
  consumer
instance that has no idea about the current position and will
 restart
   based
on the offset reset setting (because attempting to fetch the
 committed
offset will fail since no offsets have been committed).
   
  
   Ewen,
  
   What happens if there is a broker failure and a new broker becomes the
   partition leader? Does the high level consumer start listening to the
  new
   partition leader at the in-memory position, or does it restart based
 on
   saved offsets?
  
   Thanks,
   -James
  
-Ewen
   
   
Could be it's just me not understanding this from javadoc. If not,
  maybe
javadoc can be improved to make this (even) more obvious.
   
Kind regards,

Re: New consumer - poll/seek javadoc confusing, need clarification

2015-07-23 Thread Stevo Slavić
Thanks Ewen for heads up.

It's great that seek is not needed between polls when business goes as
usual.

In an edge case, when my logic detects it needs to go back and reread events
from a given position in history, I use seek. I found out that the next poll
after seek will not respect the offset used in seek. Strangely, even
Consumer.position returns the same offset that seek has set for the consumer
instance, but poll still does not return messages starting from that offset.

E.g. say there are 5 messages published to a single partition of a single
topic. The consumer subscribes to that topic partition, with the
smallest/earliest offset reset strategy configured, and consumer.position
confirms that the consumer is at position 0.
Then poll is issued and it returns all 5 messages. The logic processes the
messages, detects it needs to go back in history to position 0, does not
commit, and calls seek to 0; position confirms the consumer is at offset 0.
The next poll does not return any messages.

So seek is not really doing what it should, according to its javadoc:

/**
 * Overrides the fetch offsets that the consumer will use on the next
{@link #poll(long) poll(timeout)}. If this API
 * is invoked for the same partition more than once, the latest offset will
be used on the next poll(). Note that
 * you may lose data if this API is arbitrarily used in the middle of
consumption, to reset the fetch offsets
 */

I've also checked that calling seek multiple times does not help to get poll
to use the offset set with the last seek.
Something may be wrong with the poll implementation, making it not respect
the offset set with seek.
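The semantics the javadoc promises can be sketched with a stdlib-only toy model. ToyConsumer below is a hypothetical stand-in for KafkaConsumer, not the real client (the real client fetches asynchronously, which is where the observed gap appears):

```java
import java.util.ArrayList;
import java.util.List;

// Toy in-memory model of the semantics the seek() javadoc promises.
// ToyConsumer is hypothetical and stands in for KafkaConsumer.
class ToyConsumer {
    private final List<String> log;   // the partition's message log
    private long position = 0;        // in-memory fetch position

    ToyConsumer(List<String> log) { this.log = log; }

    long position() { return position; }

    // Override the fetch offset used by the next poll()
    void seek(long offset) { position = offset; }

    // Return everything from the current position and advance it
    List<String> poll() {
        List<String> records = new ArrayList<>(log.subList((int) position, log.size()));
        position = log.size();
        return records;
    }
}

public class SeekDemo {
    public static void main(String[] args) {
        ToyConsumer consumer = new ToyConsumer(List.of("m0", "m1", "m2", "m3", "m4"));
        System.out.println(consumer.poll().size());  // first poll drains all 5
        consumer.seek(0);
        System.out.println(consumer.position());     // 0, as in the report above
        System.out.println(consumer.poll().size());  // per the javadoc, 5 again
    }
}
```

In this model the second poll returns the 5 messages again, which is what the javadoc leads one to expect; the report above is that the real client's first poll after seek comes back empty instead.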

Kind regards,
Stevo Slavic.


On Wed, Jul 22, 2015 at 7:47 AM, Ewen Cheslack-Postava e...@confluent.io
wrote:

 It should just continue consuming using the existing offsets. It'll have to
 refresh metadata to pick up the leadership change, but once it does it can
 just pick up where consumption from the previous leader stopped -- all the
 ISRs should have the same data, so the new leader will have all the same
 data the previous leader had (assuming unclean leader election is not
 enabled).

 On Tue, Jul 21, 2015 at 9:11 PM, James Cheng jch...@tivo.com wrote:

 
   On Jul 21, 2015, at 9:15 AM, Ewen Cheslack-Postava e...@confluent.io
  wrote:
  
   On Tue, Jul 21, 2015 at 2:38 AM, Stevo Slavić ssla...@gmail.com
 wrote:
  
   Hello Apache Kafka community,
  
   I find new consumer poll/seek javadoc a bit confusing. Just by reading
  docs
   I'm not sure what the outcome will be, what is expected in following
   scenario:
  
   - kafkaConsumer is instantiated with auto-commit off
   - kafkaConsumer.subscribe(someTopic)
   - kafkaConsumer.position is called for every TopicPartition HLC is
  actively
   subscribed on
  
   and then when doing multiple poll calls in succession (without calling
   commit), does seek have to be called in between poll calls to position
  HLC
   to skip what was read in previous poll, or does HLC keep that state
   (position after poll) in memory, so that next poll (without seek in
  between
   two poll calls) will continue from where last poll stopped?
  
  
   The position is tracked in-memory within the consumer, so as long as
  there
   isn't a consumer rebalance, consumption will just proceed with
 subsequent
   messages (i.e. the behavior I think most people would find intuitive).
   However, if a rebalance occurs (another consumer instance joins the
 group
    or some leave), then a partition may be assigned to a different
  consumer
   instance that has no idea about the current position and will restart
  based
   on the offset reset setting (because attempting to fetch the committed
   offset will fail since no offsets have been committed).
  
 
  Ewen,
 
  What happens if there is a broker failure and a new broker becomes the
  partition leader? Does the high level consumer start listening to the new
  partition leader at the in-memory position, or does it restart based on
  saved offsets?
 
  Thanks,
  -James
 
   -Ewen
  
  
   Could be it's just me not understanding this from javadoc. If not,
 maybe
   javadoc can be improved to make this (even) more obvious.
  
   Kind regards,
   Stevo Slavic.
  
  
  
  
   --
   Thanks,
   Ewen
 
 


 --
 Thanks,
 Ewen



Re: New consumer - poll/seek javadoc confusing, need clarification

2015-07-23 Thread Stevo Slavić
Strangely, if after seek I make several poll requests, poll eventually
returns messages from the offset that seek set.
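The behavior described above suggests the fetch triggered by seek takes a few poll cycles to complete, so a common workaround is to keep polling until records arrive. A stdlib-only sketch follows; FlakyConsumer is a hypothetical stand-in for a client whose first polls after a seek come back empty while the fetch is still in flight:

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

// Sketch of the workaround: keep polling after seek() until records show up.
// FlakyConsumer is a hypothetical stand-in, not the real KafkaConsumer.
public class RetryPollDemo {
    static class FlakyConsumer {
        private final Deque<List<String>> responses = new ArrayDeque<>();
        FlakyConsumer() {
            responses.add(Collections.emptyList());   // fetch not ready yet
            responses.add(Collections.emptyList());   // still not ready
            responses.add(List.of("m0", "m1", "m2")); // records from the seek offset
        }
        List<String> poll(long timeoutMs) {
            return responses.isEmpty() ? Collections.emptyList() : responses.poll();
        }
    }

    static List<String> pollUntilRecords(FlakyConsumer consumer, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            List<String> records = consumer.poll(100);
            if (!records.isEmpty()) {
                return records;
            }
        }
        return Collections.emptyList(); // give up after maxAttempts empty polls
    }

    public static void main(String[] args) {
        FlakyConsumer consumer = new FlakyConsumer();
        System.out.println(pollUntilRecords(consumer, 5)); // [m0, m1, m2]
    }
}
```

Bounding the retries (maxAttempts) keeps the caller from spinning forever if the seek offset is out of range and no records will ever arrive.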
