I don't see this error message anywhere in Accumulo code. It looks like it
might be coming from d4m?
On Sat, Apr 20, 2024, 17:07 Geoffry Roberts
wrote:
> v2.1.2
>
> On Sat, 20 Apr 2024 at 13:11, Dave Marion wrote:
>
>>
>>
>> What version of Accumulo are you using?
>>
>>
>>
>> On Apr 20, 2024
The Apache Accumulo project is pleased to announce the release of
Apache Accumulo 1.10.4! Apache Accumulo 1.10.4 is the final bug fix
release of the 1.10 LTM release line. After this release, the 1.10
release line will be considered end-of-life. Future bugs reported
against this version are
The Apache Accumulo project is pleased to announce the
concurrent release of versions 2.1.2 and 3.0.0.
Apache Accumulo 2.1.2 is a bug fix release of the 2.1 LTM
release line. This release includes several critical and
minor bug fixes and performance improvements. See the
release notes linked
Severity: critical
Affected versions:
- Apache Accumulo 2.1.0 before 2.1.1
Description:
Improper Authentication vulnerability in Apache Software Foundation Apache
Accumulo.
This issue affects Apache Accumulo: 2.1.0.
Accumulo 2.1.0 contains a defect in the user authentication process that may
The Apache Accumulo project is pleased to announce the release
of Apache Accumulo 2.1.1! Apache Accumulo 2.1.1 is a bug fix
release of the 2.1 LTM release line. This release includes several
critical and minor bug fixes and performance improvements.
See the release notes linked below for details.
l of them) meant to be long
> lived and serve multiple subsequent requests?
>
> Sent from my iPhone
>
> On Jun 5, 2023, at 15:14, Christopher wrote:
>
>
> In Accumulo code, we do try to reuse thrift connections, but I'm not sure if
> the thrift code itself is reusing
In Accumulo code, we do try to reuse thrift connections, but I'm not sure
if the thrift code itself is reusing TLS sessions or not.
Keep in mind that Accumulo is a distributed system. Many of these
handshakes might be going to different servers in a cluster. Are you able
to tell if that is the
The Apache Accumulo project is pleased to announce the release
of Apache Accumulo 1.10.3! Apache Accumulo 1.10.3 is a bug fix
release of the 1.10 LTM release line. This release includes several
minor bug fixes and performance improvements.
See the release notes linked below for details.
Users of
your
init
On Sat, Mar 11, 2023, 12:13 Bernadette Olean
wrote:
> ok so remove the data in hadoop hdfs first and then rerun accumulo init ?
> hadoop fs -rmr /accumulo from any master node ?
>
> On Sat, Mar 11, 2023 at 12:11 PM Christopher wrote:
>
>> I believe init requir
I believe init requires that the directory not exist, so it shouldn't
delete anything.
On Sat, Mar 11, 2023, 10:23 Bernadette Olean
wrote:
> will accumulo init clear out all the disk space ? also will zookeeper
> automatically delete all the links to the original data ?
>
> --
> Bernadette
We certainly could add that feature. The `init` command is kind of a
special utility, and its command-line argument parsing is probably
older, written before the `-p` option was added to the server
processes' command-line argument parsing. But, if it doesn't work, it
can be made to. In the
Based on the error message, it looks like you might need to configure each
of the Accumulo nodes with the AWS credentials file.
On Fri, Jan 20, 2023, 11:43 Samudrala, Ranganath [USA] via user <
user@accumulo.apache.org> wrote:
> Hello again!
>
>
>
> Next problem I am facing is configuring Minio
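For what it's worth, wiring Accumulo/Hadoop nodes to S3-compatible storage like MinIO is usually done through the hadoop-aws (s3a) connector; a hypothetical `core-site.xml` sketch (endpoint, bucket, and credential values below are placeholders, not from this thread):

```xml
<!-- core-site.xml on each Accumulo/Hadoop node (placeholder values) -->
<configuration>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>http://minio.example.com:9000</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>MINIO_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>MINIO_SECRET_KEY</value>
  </property>
  <property>
    <!-- MinIO typically requires path-style access rather than virtual-host style -->
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
</configuration>
```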
The Apache Accumulo project is pleased to announce the release
of Apache Accumulo 2.1.0! Apache Accumulo 2.1.0 contains numerous
features and improvements, and contains over 1200 contributions from
over 50 contributors.
This release includes external compactions, separate scanner thread
pools,
That's odd. They should be available immediately. Are you using
replication? What kind of scan are you doing? Is it an offline scanner, or
isolated scanner?
On Mon, Oct 31, 2022, 15:41 Jeff Turner wrote:
> any ideas why mutations via thrift proxy server would take 120 to 150
> seconds to show
DFS
> as S3 does not support sync. If you put your WAL in S3, and you have a
> tserver failure, then it’s possible that you will lose data.
>
>
>
> *From:* Christopher
> *Sent:* Wednesday, October 12, 2022 4:12 PM
> *To:* user@accumulo.apache.org
> *Subject:* Re: Accumul
Since S3 became strongly consistent, I think it would probably just work.
But, obviously, we can't make any guarantees, especially about the behavior
of software outside of our control. So, your experience may vary.
On Wed, Oct 12, 2022 at 12:28 PM Josh Clum wrote:
> Hi,
>
> Question on this
This question seems to be a log4j performance/behavior question. You
may have more luck at https://logging.apache.org/log4j for anything
specific to the behavior of log4j.
On Fri, Jul 8, 2022 at 2:44 AM Hart, Andrew via user
wrote:
>
>
>
> I am still trying to track down the cause of the
There are no unadvertised utilities for maintaining replication
entries. The replication feature in Accumulo has not been maintained
since its inclusion, and there are many outstanding bugs such as
these. It is because of these that we intend to mark it as deprecated
in 2.1. I would strongly
The Accumulo file garbage collection mechanism is designed to fail safe to
only delete files it knows are no longer in use. It also tries to do this
with minimal interaction with the hdfs name node (so, no scanning the
entire file system to find files). It's possible that in some
circumstances,
On Wed, Jun 8, 2022 at 2:40 PM Scott Kirklin wrote:
>
> Hello,
>
> I am trying to do graph traversal with a custom Iterator. Simplifying a bit,
> a “node” is a unique row id and edges are represented as an entry where the
> Key.row is the source node and the Key.colQualifier is the target node.
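As a toy illustration of the edge encoding described above (plain Java, not the Accumulo API; all names are made up): treating each entry's row as the source node and its column qualifier as the target yields an adjacency list.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class EdgeModel {
    // One Accumulo entry, reduced to the two Key fields the encoding uses.
    public static class Entry {
        final String row;          // source node
        final String colQualifier; // target node
        public Entry(String row, String colQualifier) {
            this.row = row;
            this.colQualifier = colQualifier;
        }
    }

    // Build an adjacency list: source node -> list of target nodes.
    public static Map<String, List<String>> adjacency(List<Entry> entries) {
        Map<String, List<String>> adj = new HashMap<>();
        for (Entry e : entries) {
            adj.computeIfAbsent(e.row, k -> new ArrayList<>()).add(e.colQualifier);
        }
        return adj;
    }
}
```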
The stack trace implies the problem is occurring in the accumulo-gc
process. Is that the only process where this is occurring, or is that
just a representative example?
Are you using Kerberos to authenticate to Accumulo (client-server, or
server-server with SASL RPC), or only using Kerberos for
Since we're just users of Hadoop Metrics 2, and not its developers,
this might be a question better suited to the Hadoop general mailing
list, to inquire as to how to configure Hadoop Metrics 2 for
Prometheus. It does seem like they have some kind of support for it
Upgrading ZooKeeper will be necessary to work through the issue, but it may
not be sufficient. At the very least, you should upgrade to the latest
version of ZooKeeper 3.4, which I believe is 3.4.14. Updating bugfix/patch
releases should be part of routine maintenance. Upgrading to the next minor
Ed's description is slightly wrong:
Yes, flush is the same as a minor compaction, writing in-memory data
to an RFile. The shell calls this a "flush", because it's a more
intuitive name than "minor compaction". In the technical
documentation, it could be referred to as either, and some of our
Since that error message is coming from the libthrift library, and not
Accumulo code, we would need a lot more context to even begin helping you
troubleshoot it. For example, the complete stack trace that shows the
Accumulo code that called into the Thrift library, would be extremely
helpful.
your
current setup.
>
> I will have to look at each and every property to ensure it makes sense for
> sure..
>
> Thanks
>
> -S
>
> -Original Message-
> From: Christopher
> Sent: Wednesday, March 2, 2022 3:09 PM
> To: accumulo-user
> Subject: Re:
ing them, you can ask here, but I would start by reviewing your
configs against the docs.
>
> Thanks
>
> -S
>
>
>
> From: Christopher
> Sent: Wednesday, March 2, 2022 1:32 PM
> To: accumulo-user
> Subject: [External] Re: accumulo
The replacements message should only appear if you have
instance.volumes.replacements set in your configuration.
On Wed, Mar 2, 2022 at 11:02 AM Ligade, Shailesh [USA]
wrote:
>
> Hello,
>
> I need to reboot a tserver with 34k hosted tablets.
>
> I set table.suspend.duration to 15 min and stop
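For context, the replacements property mentioned above lives in `accumulo.properties`; a hypothetical sketch (the URIs are placeholders, and the exact pair syntax should be checked against the property reference):

```properties
# accumulo.properties (placeholder URIs)
instance.volumes=hdfs://newnn:8020/accumulo
# comma-separated list of "old-volume new-volume" pairs
instance.volumes.replacements=hdfs://oldnn:8020/accumulo hdfs://newnn:8020/accumulo
```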
I don't think we're doing anything specific to set the permissions, so
they should respect your environment setup. Which version of Accumulo
are you using? And, are you referring to permissions in HDFS, a local
filesystem implementation of DFS, or something else?
If you're referring to files
Thanks for the write up! That's great that you were able to get things
back up and running. I was following your conversation in the Slack
channel. Hopefully, this will help others if they run into something
similar.
Also, just wanted to mention, since you said you were running 1.7.0,
that 1.7.0
Have you considered upgrading to 1.10.2? It includes changes in 1.10.0 that
we released in September 2020 to specifically address slow startups due to
rebalance thrashing on restarts:
https://accumulo.apache.org/release/accumulo-1.10.0/#tserver-startup-and-shutdown-protections
However, I don't
The Apache Accumulo project is pleased to announce the release
of Apache Accumulo 1.10.2! Apache Accumulo 1.10.2 is a bug fix
release of the 1.10 LTM release line. Among other things, it
removes the dependency on log4j 1.2 (using reload4j instead).
See the release notes linked below for details.
This might be a transient issue. It looks like a warning, not an
error. Maybe the table state in ZK is not propagating to all tservers
at the same time? Restarting the cluster could help. You probably
didn't need to add WRITE permission to the table. If I remember
correctly, replication should use
You should not rely on the start-daemon.sh script's strategy for
rotating logs in production. That is more or less a hack for scenarios
where you're running "out of the tarball" rather than a properly
integrated and configured deployment. You're right that it only
rotates when that script is run
ry...i was thinking to reduce the heap to 65G
>
> -S
>
> -Original Message-
> From: Christopher
> Sent: Monday, November 22, 2021 7:20 PM
> To: accumulo-user
> Subject: Re: [External] Re: accumulo 1.10.0 tserver goes down under heavy
> ingest
>
> I don't know how
I don't know how to tune the oom killer, but I do wonder why you would
need an 80G Java heap. That seems excessive to me. Are you using the
native map library?
On Mon, Nov 22, 2021 at 7:06 PM Ligade, Shailesh [USA]
wrote:
>
> Thanks Christopher,
>
> It is actually oom killer.
That log message is basically just reporting that the connection to ZK
failed. It's not very helpful in determining what led to that. You'll
probably have to gather additional evidence to track down the problem.
Check the master and tserver logs prior to the crash, as well as the
ZooKeeper logs.
That does seem like a bug. We probably should be scanning between:
tableId + ';' and tableId + '<' instead of scanning without that
semicolon. Would you be willing to create a new issue or pull request
at https://github.com/apache/accumulo ?
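For the curious, the reason the semicolon matters is plain lexicographic ordering: every metadata row for a table id sorts between id + ';' and id + '<' (';' is 0x3B, '<' is 0x3C). A quick sketch with a hypothetical table id:

```java
public class MetadataRange {
    // Metadata rows for a table look like "<tableId>;<endRow>", with
    // "<tableId><" marking the table's default (last) tablet.
    public static boolean inTableRange(String tableId, String metadataRow) {
        return metadataRow.compareTo(tableId + ";") >= 0
            && metadataRow.compareTo(tableId + "<") <= 0;
    }
}
```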
On Wed, Nov 10, 2021 at 10:32 PM McClure, Bruce MR 2
wrote:
>
>
t (can scan tables, monitor is up etc)
>
> I also updated start-all to use systemctl but that did not help..
>
> -S
> -Original Message-
> From: Christopher
> Sent: Monday, November 1, 2021 9:47 AM
> To: accumulo-user
> Subject: [External] Re: accumulo 1.10 stop-all.sh scr
The start-all.sh / stop-all.sh scripts that come with Accumulo 1.10
are just one possible set of out-of-the-box scripts that you could
use. If you have written or acquired systemd unit files to manage your
services, you may be better off using those instead, and avoiding the
built-in scripts
Hi Bruce,
We don't have an API for forcing the balancer to rebalance, but I
believe it automatically runs every couple of minutes. So, it should
get frequent opportunities to rebalance. It shouldn't be necessary to
force a rebalance, if your balancer logic takes into account all the
factors you
That class was never public API. It was internal only. We didn't need it,
since Java 8 provides its own, so it was removed. I recommend using Java
8's built-in or commons-codec's implementation if you need one. Guava might
also have a good implementation for you to use.
On Tue, Oct 26, 2021,
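Since Java 8, the JDK ships its own codec that covers the common cases; a minimal sketch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    // Encode raw bytes as a Base64 string using the JDK's built-in codec.
    public static String encode(byte[] data) {
        return Base64.getEncoder().encodeToString(data);
    }

    // Decode a Base64 string back to raw bytes.
    public static byte[] decode(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        System.out.println(encode("accumulo".getBytes(StandardCharsets.UTF_8))); // YWNjdW11bG8=
    }
}
```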
Some of our launch scripts (start-daemon.sh in 1.x and the equivalent
accumulo-service in 2.x) do check the output of `ulimit -n` to see if
it is at least 32768. These scripts are optional, though.
Everybody's situation is unique, so we probably can't offer specific
advice for your environment,
The implementation of the client properties did change a bit, but it
shouldn't have fundamentally changed the behavior from the user
perspective. This might be a bug, or a gap in the documented expected
behavior. Please try to confirm your configuration is correct, and
consider filing a bug report
I'm not aware of any existing prescriptive recommendations of the type
you are asking for. I think most recommendations tend to be reactive
and specific, rather than prescriptive and general, because so much
depends on the particulars of a user's situation. There are too many
variables, and each
The design of replication is intentionally passive and "eventually
consistent" for efficiency. Batching efficiency is one reason why the
feature is tightly coupled to WALs. If you need immediate replication,
or want greater control over the batching process, you can create a
layer on top of two
ailesh Ligade wrote:
>
>> Thank you,
>>
>>
>>
>> Is there way to maintain that data locality, I mean over time with table
>> splitting, hdfs rebalancing etc we may not have data locality…
>>
>>
>>
>> Thanks again
>>
>>
time with table
> splitting, hdfs rebalancing etc we may not have data locality…
>
>
>
> Thanks again
>
>
>
> -S
>
>
>
> *From:* Christopher
> *Sent:* Friday, September 10, 2021 8:40 AM
> *To:* accumulo-user
> *Subject:* [EXTERNAL EMAIL] - Re: accum
Data locality and simplified deployments are the only reasons I can think
of. Accumulo doesn't do anything particularly special for data locality,
but typically, an HDFS client (like Accumulo servers) will (or can be
configured to) write one copy of any new blocks locally, which should
permit
Regarding the weburl, you can go to the monitor's /xml page to see
some server status information that you could parse out.
On Wed, Sep 1, 2021 at 12:30 PM Mike Miller wrote:
>
> > Do we have updated documentation for influxdb 2 and grafana 7+?
>
> There is nothing on the Accumulo website, no.
It depends on what you mean by "master node". If you mean the
"accumulo-master" service in 1.10, it should just be a matter of
killing the service on one node, and starting it on the replacement
node. If you have other processes/services on the same node that you
need to migrate, you may need to
You can disable the entire replication feature by taking the
replication table offline.
(in the shell: `offline accumulo.replication`)
(in Java/jshell:
`clientOrConnector.tableOperations().offline("accumulo.replication");`)
This won't necessarily terminate any background threads/resources
started
Hi,
I looked through the code, and it doesn't look like there's anything
that automatically cleans up these replication servicer advertisement
endpoints when a tserver is removed. You didn't specify a version, but
it looks like this is the case, regardless of which version you're
using. You can
It looks like the examples were updated to work with 2.1.0-SNAPSHOT's
new Constraint SPI, which doesn't exist for 2.0.1. Try checking out
the 2.0 branch in the accumulo-examples, if you're trying to run them
with 2.0.
On Thu, Jul 15, 2021 at 12:03 PM Christine Buss
wrote:
>
>
> And I should
> I deleted accumulo 2.0.1 and installed accumulo 1.10.1.
> Then edited the conf/ files. I think I didn't do that right before.
> And then it worked.
>
> Sent: Friday, 09 July 2021 at 16:30
> From: "Christopher"
> To: "accumulo-user"
> Subject: R
"Connection refused" could also mean that you're not connecting using
the correct bind address or you have a firewall or something blocking
the connection.
You can see what ports your services are listening on with `sudo netstat -tlnp`
If your machine's hostname is "mymachine" and you're trying
in from the client?
> What's the best way to do that?
>
> On Sun, May 2, 2021 at 10:32 PM Christopher wrote:
>>
>> Iterator parameters are passed as strings, so you have to encode
>> binary data if you need to send that. The limit should be reasonable,
>> but ther
Iterator parameters are passed as strings, so you have to encode
binary data if you need to send that. The limit should be reasonable,
but there's no hard-coded limit. If you are storing options for an
iterator configured on a table, one would expect it to be able to be
small enough to be stored
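One hedged way to pass binary data through string-valued iterator options (the option name and helper below are made up for illustration) is to Base64-encode on the client and decode from the options map handed to the iterator's init():

```java
import java.util.Base64;
import java.util.Map;

public class BinaryOption {
    // Client side: turn raw bytes into a string safe to pass as an
    // iterator option (e.g. via IteratorSetting.addOption in Accumulo).
    public static String toOption(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    // Iterator side: recover the original bytes from the options map.
    public static byte[] fromOptions(Map<String, String> options, String name) {
        return Base64.getDecoder().decode(options.get(name));
    }
}
```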
That article, and these questions, seem to be about AWS support of
Accumulo versions using Amazon's EMR. It might be best to ask Amazon
directly about what they do and don't support, since it's their recipe
and their features. As I understand it, though, EMR is just a
specialized distribution of
ss path, but that's probably more
trouble than the first option.
Hope that helps!
Christopher
On Mon, Jan 25, 2021 at 10:31 PM Christopher wrote:
>
> I could not reproduce this using the precompiled Hadoop 3.2.2 binary
> tarball and Accumulo 2.0.1 using fluo-uno to run it on a sin
common/lib
>
> I don't understand what's missing here, which jars?
>
>
> Sent: Tuesday, 19 January 2021 at 17:23
> From: "Christopher"
> To: "accumulo-user"
> Subject: Re: ERROR: Uncaught exception java.lang.NoClassDefFoundError:
> co
This missing class is certainly not a dependency of Accumulo. It could be a
dependency of Hadoop. However, there is no such version as Hadoop 3.6.2, so
it could also be something unique about your environment (a non-standard
version of Hadoop?). As Ed said, you should check your Accumulo class
l distribute it to all tservers, it will
> process in parallel; and I’ll get back as single result set?
>
>
>
> Geoffry Roberts
>
> Lead Technologist
>
> 702.290.9098
>
> roberts_geof...@bah.com
>
>
>
> Booz | Allen | Hamilton
>
> BoozAllen.com
>
A BatchScanner takes multiple ranges, groups them by TServer, and then
queries TServers in parallel for the ranges that are located in each,
returning data in its iterator as it comes back (without regard to order).
If you run the same scan on multiple nodes, the task won't be
sub-divided in any
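A toy sketch of that access pattern (plain Java, not the real client code; the range-to-server assignment is assumed given): group the ranges by hosting server, issue one fetch per server in parallel, and merge results with no ordering guarantee.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BatchScanSketch {
    // Group the requested ranges by the tserver that hosts them.
    public static Map<String, List<String>> groupByServer(Map<String, String> rangeToServer) {
        Map<String, List<String>> byServer = new HashMap<>();
        rangeToServer.forEach((range, server) ->
            byServer.computeIfAbsent(server, s -> new ArrayList<>()).add(range));
        return byServer;
    }

    // One "RPC" per server, issued in parallel; results merged in arrival order.
    public static List<String> fetchAll(Map<String, List<String>> byServer) {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, byServer.size()));
        try {
            List<Future<List<String>>> futures = new ArrayList<>();
            for (List<String> serverRanges : byServer.values()) {
                futures.add(pool.submit(() -> serverRanges)); // stand-in for a scan RPC
            }
            List<String> results = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                try {
                    results.addAll(f.get());
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```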
ption: Connection refused
>
>at sun.nio.ch.Net.connect0(Native Method) ~[?:?]
>
>at sun.nio.ch.Net.connect(Net.java:493) ~[?:?]
>
>at sun.nio.ch.Net.connect(Net.java:482) ~[?:?]
>
>at sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:588) ~[?:?]
Is there more to the debug message? A full stack trace perhaps? It might
show the specific IOException causing the problem connecting.
The two most likely causes are DNS/name service causing the server host
name to not be resolved (very common), and the port ( in this case)
being unreachable
I have not had experience with HDInsight. My first thoughts are that if it
provides Hadoop and ZooKeeper for you, then that's a few less things to
worry about from a maintenance perspective for your Accumulo cluster. On
the other hand, if you can't run Accumulo nodes colocated with Hadoop
A custom CompactionStrategy is probably your best bet, I would think, since
you have very specific requirements.
You may also be interested in the work done by Keith Turner for 2.1.0 (not
yet released, as it is still under development) to add more control over
compactions. A preview of the javadoc
It's kind of hard to tell because this code is pretty old and has a
lot of technical debt accrued over time, but it looks like the
original developer wanted to provide a mechanism for a user-pluggable
implementation to cache operations, but we also wanted to ask that
implementation not to use a
Hi Jerome,
You should use Guava 27.0-jre with Accumulo 1.10.0 and Hadoop 3.2.1.
1.9.3 does not support Hadoop 3.2.1, precisely for reasons such as the
Guava issue you encountered.
I fixed the issue you encountered in 1.10.0 in
https://github.com/apache/accumulo/pull/1688
See the release notes
I'm not sure CheckForMetadataProblems can check for all that many different
types of problems. It is limited.
If you have tablets still in the metadata table for tables that no longer
exist, that indicates you probably had some sort of crash and possible
corruption of your metadata.
The only
ut I don't really know the answer to that at
this time. Feel free to submit a pull request with suggested changes.
>
> thanx
>
>
> On Sun, Sep 13, 2020 at 3:32 AM Christopher wrote:
>>
>> Can you provide more information, such as:
>> * version of Accumulo
>>
Can you provide more information, such as:
* version of Accumulo
* the actual error message you see
* any stack trace corresponding to the error message, in full, if possible
The shell command uses an admin utility that is unable to merge
metadata tables. However, I believe the metadata table can
The Apache Accumulo project is pleased to announce the
release of Apache Accumulo 1.10.0!
Apache Accumulo 1.10.0 is a continuation of our 1.x series,
and contains a few major bug fixes since 1.9.3, and several
performance enhancements and other improvements. This
release is expected to be the
You can kill the accumulo-master service and restart it.
But you should definitely upgrade to the latest 1.x, to get this, as
well as numerous other fixes. It looks like the fix for this one was
included in 1.9.3, according to the GitHub pull request.
On Thu, Aug 6, 2020 at 6:09 AM vLex Systems
The one in core.client is newer, and intended as a stable public API.
The other is internal API, more awkward to use, and not guaranteed to
be stable. If the public API is missing something that would be
useful, we can consider making a non-public method more visible, or
adding new public methods
The `du` command should show in bytes. Keep in mind that Accumulo
compresses data in its files. If the number doesn't match what you see
for the *.rf files in Hadoop, there may be a bug. Please let us know
if you find this to be the case.
On Tue, Apr 14, 2020 at 10:30 PM Niclas Hedhman wrote:
>
44e-f55f-6677889900cc
root@uno t1> egrep '.*itemId: (?\![a-f0-9]{8}(?:-[a-f0-9]{4}){4}[a-f0-9]{8}).*'
egrep '.*itemId: (?![a-f0-9]{8}(?:-[a-f0-9]{4}){4}[a-f0-9]{8}).*'
b b:c []itemId: 11aa22bbd33d2abcav34-11d25d334455
On Thu, Mar 19, 2020 at 8:17 AM Donald Mackert wrote:
>
> Chr
The shell command, egrep, uses the RegExFilter[1] underneath. It
supports Java regular expressions, which does support negative look
ahead. So, it should be possible.
However, it is possible there's some quoting issues... the shell
itself uses backslash to escape, but it also uses JLine to parse
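For instance, the negative-lookahead pattern under discussion behaves as expected in plain Java regex (the test strings below are made up), so any misbehavior in the shell is likely down to quoting rather than the engine:

```java
import java.util.regex.Pattern;

public class LookaheadDemo {
    // Matches lines whose "itemId: " is NOT followed by a UUID-shaped hex string.
    static final Pattern NOT_UUID = Pattern.compile(
        ".*itemId: (?![a-f0-9]{8}(?:-[a-f0-9]{4}){4}[a-f0-9]{8}).*");

    public static boolean matches(String value) {
        return NOT_UUID.matcher(value).matches();
    }
}
```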
> On Tuesday, March 10, 2020, 10:34:06 PM EET, Christopher
> wrote:
>
>
> Hi Maria,
>
> I do not know what version of commons-vfs2 works best with Accumulo. I
> know that people have tried various versions, based on their
> individual needs. I do not use the
PM Maria Krommyda wrote:
>
> Hello Christopher,
>
> Sorry if my initial email was not clear enough.
>
> I have an Accumulo datastore connected with my Geoserver.
> I figured the problem because Geoserver had an issue connecting to the
> datastore.
> After figuring
Hi Maria,
I'm not sure this is an Accumulo issue. Accumulo doesn't have a
"Geoserver" service, and should not be affected by anything related to
certbot. This might be a Geomesa-specific issue, which is not a
project maintained by the Accumulo developers (though, there may be
knowledgeable people
more sense.
>>
>> Thanks,
>>
>> On Fri, Feb 28, 2020 at 10:30 PM Christopher wrote:
>>>
>>> The tracing in Accumulo is instrumented using HTrace. You can
>>> configure any HTrace sink for collecting the trace information. The
>>>
The tracing in Accumulo is instrumented using HTrace. You can
configure any HTrace sink for collecting the trace information. The
built-in one that writes to an Accumulo table (called
"ZooTraceClient") is the default, but you can easily change this by
editing the configuration property for
The code that backs the `bin/accumulo rfile-info
hdfs://path/to/rfile.rf` command is located at
https://github.com/apache/accumulo/blob/3fd5cad92f9b63ac19e4466f3f2d5237b905262c/core/src/main/java/org/apache/accumulo/core/file/rfile/PrintInfo.java
It may be a useful example of how to read
I don't know of a workaround for this, but if, as you say, we've
already fixed the bug upstream, then the best thing you can do is
update to the newer version. Since you get your Accumulo from a
vendor, and they don't have an update, you might be able to backport
the fix yourself, if they provide
My understanding is that CFB mode may not be suitable for write-ahead
logs. Perhaps CBC is better? I'm not sure. For what it's worth, a lot
of work was done in Accumulo 2.0 to help support configuring separate
modes for RFiles and WALs
Hello Parmesh.
Your error appears to be an unrelated separate issue from the one in
this thread. Please start a new thread and provide more details about
the error you are seeing (the version of Accumulo you are using, and a
stack trace, a description of how to reproduce the problem, your
"Are you ready for #Hacktoberfest 2019? Support Open Source in October
and earn a limited edition T-shirt! @DigitalOcean @ThePracticalDev
https://hacktoberfest.digitalocean.com/ "
The above line is the canned promo tweet from the Hacktoberfest
website, but it summarizes the event better than I
I merged in some of the outstanding pull requests, and made some other
small updates to get it building again. It now builds, but the ITs
aren't passing (haven't investigated yet) and it will need some
testing.
There's definitely issues with it using non-public API from Accumulo
still. That will
;
> the disks looked good and we have plenty of space; dont know what happened???
>
> thanx
>
>
>
> On Wed, Sep 4, 2019 at 5:37 PM Christopher wrote:
>>
>> I can't match those line numbers up exactly. What version are you running?
>>
>> Regardless, a zero
I can't match those line numbers up exactly. What version are you running?
Regardless, a zero-length RFile is not a valid RFile. It looks like it
is trying to read the meta information from the RFile to initialize
the file reader object.
You will need to copy over empty RFiles to replace the
-address
> localhost:8020
>
>
> After bouncing Hadoop, Accumulo initialized and then was able to start.
>
> On 8/30/19, 3:41 PM, "Christopher" wrote:
>
> On Fri, Aug 30, 2019 at 3:11 PM Roberts, Geoffry [USA]
> wrote:
> >
> >
On Fri, Aug 30, 2019 at 3:11 PM Roberts, Geoffry [USA]
wrote:
>
> I am following the quickstart instructions. When I get to accumulo init I
> get a ClassNotFoundException on org.apache.zookeeper.KeeperException. I have
> ZOOKEEPER_HOME set in accumulo-env.sh and it is correct. Both Zookeeper
On Thu, Aug 29, 2019 at 2:51 PM Noah Randolph wrote:
>
> Hi Christopher,
>
> Can you clarify the "multi user aspect?" Is it the ability to login as
> multiple users through one client connection, or is it the ability to have
> multiple users in general (beyond ju
On Thu, Aug 29, 2019 at 2:03 PM Josh Elser wrote:
>
> Architectural interest in the feature/product is a very different thing
> than interest in maintaining and testing said product ;)
>
> I read Christopher's reply in a "there isn't interest in contributing
> more to the service by the current,
It was separated out so it could live or die on its own, as the case may
be. It hasn't received much development attention in a while.
A new version hasn't yet been released from this new repo. If there is
still interest, somebody will need to devote some resources to attend to
it. One thing I'd
I've responded to your "Leet" issue
(https://github.com/apache/accumulo/issues/1337)
;)
On Mon, Aug 26, 2019 at 9:35 PM wrote:
>
> IIRC the ‘service.name’ property in the metrics properties file is used to
> note from which service the metric was emitted. I would suggest testing this
> by