gharris1727 commented on code in PR #14068:
URL: https://github.com/apache/kafka/pull/14068#discussion_r1286456172


##########
docs/connect.html:
##########
@@ -543,6 +543,67 @@ <h6>ACL requirements</h6>
         </tbody>
     </table>
 
+    <h4><a id="connect_plugindiscovery" href="#connect_plugindiscovery">Plugin 
Discovery</a></h4>
+
+    <p>Plugin discovery is the strategy that the Connect worker uses to find plugin classes and make them accessible to configure and run in Connectors and Tasks. This is controlled by the <a href="#connectconfigs_plugin.discovery"><code>plugin.discovery</code> worker configuration</a>, and has a significant impact on worker startup time. <code>SERVICE_LOAD</code> is the fastest strategy, but care should be taken to verify that plugins are compatible before setting this configuration to <code>SERVICE_LOAD</code>.</p>
+
+    <p>Prior to version 3.6, this strategy was not configurable, and behaved like the <code>ONLY_SCAN</code> mode, which is compatible with all plugins. In version 3.6 and later, the strategy defaults to <code>HYBRID_WARN</code>, which is also compatible with all plugins, but logs a warning for each plugin that is incompatible with the other modes. For unit-test environments that use the <code>EmbeddedConnectCluster</code>, the default is the <code>HYBRID_FAIL</code> strategy, which stops the worker with an error if an incompatible plugin is detected. Finally, the <code>SERVICE_LOAD</code> strategy will silently hide incompatible plugins and make them unusable.</p>
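+
+    <p>For example, an operator who has verified plugin compatibility can opt into the fastest strategy in the worker properties file (the file name and comments below are illustrative):</p>
+
+    <pre><code># connect-distributed.properties
+# The default in 3.6+ is HYBRID_WARN; switch to SERVICE_LOAD only after
+# verifying that all installed plugins are compatible.
+plugin.discovery=SERVICE_LOAD</code></pre>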
+
+    <h5><a id="connect_plugindiscovery_compatibility" 
href="#connect_plugindiscovery_compatibility">Verifying Plugin 
Compatibility</a></h5>
+
+    <p>To verify if all of your plugins are compatible, first ensure that you 
are using version 3.6 or later of the Connect runtime. You can then perform one 
of the following checks:</p>
+
+    <ul>
+        <li>Start your worker with the default <code>HYBRID_WARN</code> strategy, and WARN logs enabled for the <code>org.apache.kafka.connect</code> package. At least one WARN log message mentioning the <code>plugin.discovery</code> configuration should be printed. This log message will explicitly say that all plugins are compatible, or list the incompatible plugins.</li>
+        <li>Start your worker in a test environment with <code>HYBRID_FAIL</code>. If all plugins are compatible, startup will succeed. If at least one plugin is not compatible, the worker will fail to start up, and all incompatible plugins will be listed in the exception.</li>
+    </ul>
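+
+    <p>With the Log4j properties file that ships with Kafka, WARN logging for this package can be enabled with a logger entry such as the following (shown for illustration):</p>
+
+    <pre><code>log4j.logger.org.apache.kafka.connect=WARN</code></pre>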
+
+    <p>If the verification step succeeds, then your current set of installed plugins is compatible, and it should be safe to change the <code>plugin.discovery</code> configuration to <code>SERVICE_LOAD</code>. If you change the set of installed plugins, the new set may include incompatible plugins, and you should repeat the verification above. If the verification fails, you must address the incompatible plugins before using the <code>SERVICE_LOAD</code> strategy.</p>
+
+    <h5><a id="connect_plugindiscovery_migrateartifact" 
href="#connect_plugindiscovery_migrateartifact">Operators: Artifact 
Migration</a></h5>
+
+    <p>As an operator of Connect, if you discover incompatible plugins, there 
are multiple ways to try to resolve the incompatibility. They are listed below 
from most to least preferable.</p>
+
+    <ol>
+        <li>Upgrade your incompatible plugins to the latest release version 
from your plugin provider.</li>
+        <li>Contact your plugin provider and request that they migrate the 
plugin to be compatible, following the <a 
href="#connect_plugindiscovery_migratesource">source migration 
instructions</a>, and then upgrade to the migrated version.</li>
+        <li>Migrate the plugin artifacts yourself using the included migration 
script.</li>
+    </ol>
+
+    <p>The migration script is located in <code>bin/connect-plugin-path.sh</code> and <code>bin\windows\connect-plugin-path.bat</code> of your Kafka installation. The script can migrate incompatible plugin artifacts already installed on your Connect worker's <code>plugin.path</code> by adding or modifying JAR or resource files. This is not suitable for environments using code-signing, as the migration may change the artifacts such that they fail signature verification. View the built-in help with <code>--help</code>.</p>
+
+    <p>To perform a migration, first use the <code>list</code> subcommand to get an overview of the plugins available to the script. You must tell the script where to find plugins, which can be done with the repeatable <code>--worker-config</code>, <code>--plugin-path</code>, and <code>--plugin-location</code> arguments. The script will only migrate plugins present in the paths specified, so if you add plugins to your worker's classpath, you will need to specify those plugins via one or more <code>--plugin-location</code> arguments.</p>
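+
+    <p>For example, a typical migration session might look like the following (the paths are illustrative):</p>
+
+    <pre><code># Inspect the plugins visible to the script
+$ bin/connect-plugin-path.sh list --plugin-path /opt/connect/plugins
+
+# Rewrite the artifacts in place to add the missing manifests
+$ bin/connect-plugin-path.sh sync-manifests --plugin-path /opt/connect/plugins</code></pre>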

Review Comment:
   > Hmmm... I agree with the reservations about modifying JAR files in-flight. 
Do we have to actually write to those JAR files, though? Could we create/modify 
a service loader manifest on the file system instead of inside a JAR file?
   > I know this is a little inelegant but IMO it's worth considering since it 
would reduce the potential for footguns related to classpath plugins and would 
increase the general utility of the CLI tool.
   
   If we did this, we'd end up with the same problem one iteration later. Suppose we were able to find the kafka lib directory, and knew that we were running via `kafka-run-class.sh`, so the manifest shim jar would be picked up the next time the script is run. Because it's on the classpath, we wouldn't be able to mutate it safely, so we could only append manifests by adding new shim jars. If we added a bad shim jar, or a user removed a plugin and wanted sync-manifests to remove the shim, the script couldn't remove the shim itself without undermining its own running AppClassLoader.
   
   We could add some processing in the shell/batch script before or after the JVM execution, but maintaining non-trivial migration logic in both platform-agnostic shell and Windows batch sounds like an opportunity for a lot more footguns. I'm concerned enough about copy-pasting batch files without being able to test them myself; I don't think I could implement the necessary parts in batch in time for the release. And I certainly don't want to maintain 3 different copies of the same migration logic :)
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
