Hi Matthew,

While I don't know what is causing your specific problem, the cross-data
center replication feature has had outstanding bugs since its inception.
The lack of any active maintainers for the feature, along with its
substantial complexity (it was entangled with our write-ahead log
implementation, which made the write-ahead logs themselves harder to
maintain), led us to deprecate it as of 2.1.0, just over 4 years ago, and
to remove it in 3.0.0. It may have already been broken prior to 2.1.0;
it's not a feature we get feedback on very often. You should be seeing
warnings in your logs about the use of its properties, and compiler
warnings when using any replication-related APIs. The properties
documentation on the website also describes the replication properties as
deprecated.

While it may be possible that there is a simple fix for the issue you are
seeing, I don't know what it is, and I would strongly advise against
attempting to use this deprecated feature. Since 2.1 is an LTM (long-term
maintenance) version, we are still maintaining it by publishing new
releases, so we would still accept a patch for 2.1 to fix a replication
issue if one is necessary to fix yours. To my knowledge, however, there
are currently no active maintainers with knowledge of, or interest in,
maintaining the feature.

Depending on your situation, there may be viable alternatives. One option
that some people find helpful is using the import/export table feature to
perform incremental backups. Another is to load separate Accumulo
instances with an ETL pipeline built on something like Apache Kafka or
Apache NiFi. These alternatives tend to give you more control over where
and how your data is loaded, and may result in a simpler and more reliable
system architecture overall.
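For reference, here's a rough sketch of what the table export/import
workflow might look like from the shell. The table and path names are made
up for illustration, and you should check the export/import docs for your
exact version, but the general shape is:

```shell
# On the source instance: a table must be taken offline before exporting
accumulo shell -u root -e "offline -t mytable -w"
accumulo shell -u root -e "exporttable -t mytable /exports/mytable"

# The export directory contains a distcp.txt listing the files to copy;
# use it to copy the export to the destination cluster's HDFS
hadoop distcp -f hdfs://source-nn/exports/mytable/distcp.txt \
    hdfs://dest-nn/imports/mytable

# On the destination instance: import the copied files as a new table
accumulo shell -u root -e "importtable mytable_copy /imports/mytable"

# Bring the source table back online when done
accumulo shell -u root -e "online -t mytable -w"
```

Run on a schedule (and combined with compactions to consolidate files),
this can serve as a coarse-grained substitute for replication, at the cost
of it being a point-in-time copy rather than continuous.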

It appears we didn't mention the deprecation in the 2.1.0 release notes or
in the 2.x docs on the website, but we did describe its removal, and the
reasons for it, in the 3.0.0 release notes.

I realize this is probably not the response you hoped for, but I hope that
it is at least helpfully informative.

Kind regards,
Christopher

On Thu, Jan 15, 2026 at 11:30 PM Matthew Austin <[email protected]>
wrote:

> Greetings,
>
> I've got a development system, and while working through setting up
> replication using the steps outlined here:
> https://accumulo.apache.org/docs/2.x/administration/replication
>
> I'm getting this error every 30 seconds:
>
> 2026-01-16T04:25:15,960 [replication.WorkDriver] ERROR: Error while
> assigning work
> java.lang.NullPointerException: null
> at
> org.apache.accumulo.manager.replication.DistributedWorkQueueWorkAssigner.initializeWorkQueue(DistributedWorkQueueWorkAssigner.java:95)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.manager.replication.DistributedWorkQueueWorkAssigner.assignWork(DistributedWorkQueueWorkAssigner.java:107)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.manager.replication.WorkDriver.run(WorkDriver.java:85)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.core.trace.TraceWrappedRunnable.run(TraceWrappedRunnable.java:52)
> ~[accumulo-core-2.1.3.jar:2.1.3]
> at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
> 2026-01-16T04:25:45,960 [replication.WorkDriver] ERROR: Error while
> assigning work
> java.lang.NullPointerException: null
> at
> org.apache.accumulo.manager.replication.DistributedWorkQueueWorkAssigner.initializeWorkQueue(DistributedWorkQueueWorkAssigner.java:95)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.manager.replication.DistributedWorkQueueWorkAssigner.assignWork(DistributedWorkQueueWorkAssigner.java:107)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.manager.replication.WorkDriver.run(WorkDriver.java:85)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.core.trace.TraceWrappedRunnable.run(TraceWrappedRunnable.java:52)
> ~[accumulo-core-2.1.3.jar:2.1.3]
> at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
> 2026-01-16T04:26:15,960 [replication.WorkDriver] ERROR: Error while
> assigning work
> java.lang.NullPointerException: null
> at
> org.apache.accumulo.manager.replication.DistributedWorkQueueWorkAssigner.initializeWorkQueue(DistributedWorkQueueWorkAssigner.java:95)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.manager.replication.DistributedWorkQueueWorkAssigner.assignWork(DistributedWorkQueueWorkAssigner.java:107)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.manager.replication.WorkDriver.run(WorkDriver.java:85)
> ~[accumulo-manager-2.1.3.jar:2.1.3]
> at
> org.apache.accumulo.core.trace.TraceWrappedRunnable.run(TraceWrappedRunnable.java:52)
> ~[accumulo-core-2.1.3.jar:2.1.3]
> at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
>
> I tried working back through the steps in reverse and restarting things,
> but the error is still occurring. I have 2 tablet servers/HDFS datanodes
> and 1 ZooKeeper for my little dev stack. Accumulo is otherwise
> functioning.
>
> Thanks for any help.
>
> Matthew M. Austin
> Sr. DevSecOps Engineer
> DSA, Inc.
>
> 8 Neshaminy Interplex Drive | Suite 209 | Trevose, PA 19053
>
> www.dsainc.com | [email protected]
>
