Hi,
Some tests, e.g. FormatterCommandTest, hung when I ran the test suite on
Hadoop 2 using Linux and JDK 7.
Here is the command I used:
~/apache-maven-3.0.4/bin/mvn test -Dhadoop.profile=2.0
Here is the related portion from jstack:
Thread 1895: (state = IN_NATIVE)
- java.io.FileInputStream.readBytes
Hi,
I noticed that, in trunk, the hadoop.version for the hadoop-1.0 profile is
1.0.4.
The most recent stable release is 1.2.1.
Should the Hadoop version be upgraded?
Thanks
…ons or serious considerations to make about losing compatibility elsewhere.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
…20.2 just based on changes in dependencies. We're looking right now
to see how hard it is to have three-way compatibility (0.20, 1.0, 2.0).

-Joey
Any update?

-----Original Message-----
From: Joey Echeverria [mailto:j...@cloudera.com]
Sent: Monday, July 29, 2013 1:24 PM
To: dev@accumulo.apache.org
Subject: Re: Hadoop 2.0 Support for Accumulo 1.4 Branch

We're testing this today. I'll report back what we find.

-Joey
"Will 1.4 still work with 0.20 with these patches?"

Great point Billie.

- Original Message -
From: "Billie Rinaldi"
To: dev@accumulo.apache.org
Sent: Friday, July 26, 2013 3:02:41 PM
Subject: Re: Hadoop 2.0 Support for Accumulo 1.4 Branch

On Fri, Jul 26, 2013 at 11:33 AM, Joey Echeverria wrote:
> If these patches are going to be included with 1.4.4
…a message if a node in the graph gets stuck. It will also log a message
when it gets unstuck.
…t make any promises.

Sure thing. Is there already a write-up on running this full battery
of tests? I have a 10-node cluster that I can use for this.
> Great. I think this would be a good patch for 1.4. I assume that if a
> user stays with Hadoop 1 there are no dependency changes?

Yup. It works the same way as 1.5 where all of the dependency changes
are in a Hadoop 2.0 profile.

-Joey
…d 1.4.x clients with our release of the server daemons.

Great. I think this would be a good patch for 1.4. I assume that if a
user stays with Hadoop 1 there are no dependency changes?
…am?

What testing has been done? It would be nice to run Accumulo's full test
suite against 1.4.3+CDH4.

Are there any Accumulo API changes or Accumulo behavior changes?

> I believe this would violate the previously agreed upon rule of no feature
> back ports to 1.4.3
+1
On 7/26/13 11:25 AM, Eric Newton wrote:
"My question is if the community would be interested in us pulling those
back ports upstream?"
Yes, please.
…ng on how we "label" support for Hadoop 2.0.

Thoughts?

-Joey

"Why not just use the hadoop classpath generated by running `hadoop
classpath`?"

I like it!

+1
tl;dr
Ideally the generation of hadoop+accumulo's classpath should only be done
in one place. For all versions of hadoop I've seen in the last five years,
there has been one place to get hadoop's classpath (the `hadoop classpath`
command). Why not use it?
For the hadoop,
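A minimal sketch of the suggestion above (paths and variable names here are illustrative, not taken from any Accumulo script): let the `hadoop` launcher report its own classpath instead of enumerating jars that differ between Hadoop versions.

```shell
# Illustrative only: ask Hadoop for its classpath rather than hard-coding
# jar locations that differ between 0.20, 1.x, and 2.x.
HADOOP_CP="$(hadoop classpath 2>/dev/null || true)"   # empty if hadoop absent
CLASSPATH="${ACCUMULO_HOME:-/opt/accumulo}/lib/*${HADOOP_CP:+:$HADOOP_CP}"
echo "$CLASSPATH"
```

Since `hadoop classpath` already accounts for the installed version and its configuration directory, the launcher never has to know which Hadoop it is running against.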
…it necessary, does it?

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
I agree that 1.5.1 is a reasonable target, if it's fixed at all (we
don't have Counters in our input/output formats, so the problem would
be exclusive to tests/examples anyway). If hadoop compat is a 1.5
feature, this is a minor bug.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
We still have the option of putting out a separate build for 1.5.0
compatibility with hadoop 2. Should we vote on that release separately?
Seems like it should be easy to add more binary packages that correspond to
the same source release, even after the initial vote.
Adam
On Tue, May 21, 2013
I'm testing a fix, but I'm not for holding up the release for this.
First, calling a method by reflection is quite a bit slower, so even if we
fix it, it might not be appropriate.

On Tue, May 21, 2013 at 11:49 AM, John Vines wrote:
> Is this something else we can resolve via reflection or are we back to
> square 1?
Could fix it in 1.5.1.

I am starting to think that if hadoop compat was so important, it should
have been mostly completed before the feature freeze.

> -Eric
Is this something else we can resolve via reflection or are we back to
square 1?

On Tue, May 21, 2013 at 11:02 AM, Eric Newton wrote:
Ugh. While running the continuous ingest verify, YARN spit this out:

Error: Found interface org.apache.hadoop.mapreduce.Counter, but class was
expected

This is preventing the reduce step from completing.

-Eric
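The reflection idea discussed above can be sketched like this (stand-in types only; with Hadoop the target would be `org.apache.hadoop.mapreduce.Counter#getValue`). Looking the method up at runtime avoids the link-time binding that produces the "Found interface ... but class was expected" error, at the cost of slower calls:

```java
import java.lang.reflect.Method;

// Sketch only: call getValue() through reflection so the compiled caller
// never binds to Counter as a class (Hadoop 1) or an interface (Hadoop 2).
public class CounterCompat {
    // Stand-in for org.apache.hadoop.mapreduce.Counter (Hadoop 2 shape).
    public interface Counter { long getValue(); }

    public static class LongCounter implements Counter {
        public long getValue() { return 42L; }
    }

    public static long counterValue(Object counter) throws Exception {
        // Resolved against the runtime type, not at compile/link time.
        Method m = counter.getClass().getMethod("getValue");
        return (Long) m.invoke(counter);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(counterValue(new LongCounter())); // prints 42
    }
}
```

The same bytecode then works on either Hadoop line, which is why the thread weighs reflection against its per-call overhead.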
I also just snuck in that Hadoop 1/2 compatibility fix with JobContext
(ACCUMULO-1421). Not sure if that's the only change needed, but it should
be a step forward.
Adam
On Thu, May 16, 2013 at 11:23 AM, Eric Newton wrote:
> I've snuck some necessary changes in... doing integrati
It seems like the ideal option would be to have one binary build that
determines Hadoop version and switches appropriately at runtime. Has anyone
attempted to do this yet, and do we have an enumeration of the places in
Accumulo code where the incompatibilities show up?
One of the
You can have Maven generate a file with the classpath dependencies and also
make a shaded jar. I use the classpath file for normal Java processes and
the shaded jar file with 'hadoop jar'.

On Tue, May 14, 2013 at 6:14 PM, John Vines wrote:
> On that note, I was wondering if t
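A hedged sketch of the two Maven pieces described above (plugin configuration is illustrative, not taken from any Accumulo pom): `maven-dependency-plugin` writes the runtime classpath to a file, and `maven-shade-plugin` builds the fat jar for `hadoop jar`.

```xml
<build>
  <plugins>
    <!-- Write the dependency classpath to a file for plain `java` runs. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <goals><goal>build-classpath</goal></goals>
          <configuration>
            <outputFile>${project.build.directory}/classpath.txt</outputFile>
          </configuration>
        </execution>
      </executions>
    </plugin>
    <!-- Bundle everything into one shaded jar for `hadoop jar`. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```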
…pom-per-classifier.

Where does this leave you/us? (I'm not sure that I've earned an 'us'
recently around here.)

First, I note that 'Apache releases are source releases'. So, one
resort of scoundrels here would be to support only one hadoop in the
convenience binaries that get pushed to Maven Central, and let other…
It just doesn't make very much sense to me to have two different GAVs
for the very same .class files, just to get different dependencies in
the poms. However, if someone really wanted that, I'd look to make
some scripting that created this downstream from the main build.
We've written the code such that it works in either, and then we have
profiles which set the hadoop.version for convenience. The profiles also
alternate between using hadoop-client and hadoop-core, but as I mentioned
above, that is unnecessary.

Sent from my phone, please pardon the typos and brevity.
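What the profile arrangement described above might look like in a pom (a sketch; the version numbers and profile id are assumptions, not copied from Accumulo's build):

```xml
<properties>
  <!-- Overridable from the command line: -Dhadoop.version=2.0.4-alpha -->
  <hadoop.version>1.0.4</hadoop.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
    <scope>provided</scope>
  </dependency>
</dependencies>

<profiles>
  <!-- Convenience profile: mvn test -Dhadoop.profile=2.0 -->
  <profile>
    <id>hadoop-2.0</id>
    <activation>
      <property><name>hadoop.profile</name><value>2.0</value></property>
    </activation>
    <properties>
      <hadoop.version>2.0.4-alpha</hadoop.version>
    </properties>
  </profile>
</profiles>
```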
I'm not sure what the "best" solution would be, but I'd easily assume
any worthwhile solution would extend the 1.5.0 release date even farther
than I'd be happy about. So, by that stance, I'm for #4 or another quick
fix, even if it does perpetuate some sort of "hack".
On 05/14/2013 07:09 PM, B
They're the same currently. I was requesting separate gavs for hadoop 2.
It's been on the mailing list and jira.
Sent from my phone, please pardon the typos and brevity.
On May 14, 2013 6:14 PM, "Keith Turner" wrote:
> On Tue, May 14, 2013 at 5:51 PM, Benson Margulie
…than things provided by either Hadoop 1 or 2?

On Tue, May 14, 2013 at 6:08 PM, Keith Turner wrote:
> …the same GAV, you have chaos.

What GAV are we currently producing for hadoop 1 and hadoop 2?

> If you have different profiles that test against different versions of
> dependencies, but all deliver the same byte code at the end of the
> day, you don't have chaos.
…dependencies and make them included.

Sent from my phone, please pardon the typos and brevity.

On May 14, 2013 6:09 PM, "Keith Turner" wrote:
One note about option 4. When using 1.4 users have to include hadoop core
as a dependency in their pom. This must be done because the 1.4 Accumulo
pom marks hadoop-core as provided. So maybe option 4 is ok if the deps in
the profile are provided?
On Tue, May 14, 2013 at 4:40 PM, Christopher
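Because hadoop-core is marked `provided` in the 1.4 pom, it is not pulled in transitively, so a user's pom needs something like the following (version number illustrative):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.0.4</version>
</dependency>
```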
You're so quick to dismiss hadoop 2, but you really need to keep in mind how
pervasive it is. Even from our own software we can see how much people love
to run off of trunk, let alone alpha releases. But then one of the most
popular distributions, CDH, is more in line with it as well. Something to
We can easily fix the break in the hadoop dependencies by making the switch
to hadoop-client and relying on hadoop.version to set/override the version.
The hadoop 2 profile is just needed to bring in additional dependencies and
possibly set the hadoop version for convenience.

Sent from my phone, please pardon the typos and brevity.
…licitly advised against by the Maven developers (from the information
I've read). I can see its appeal, but I really don't think that we should
introduce an explicit problem for users (one that applies to users using
even the Hadoop version we directly build against... not just those using
Hadoop 2... I don't know if that point was clear), to only partially
support a version of Hadoop that is still alpha and has never had a
stable release.

BTW, Option 4 was
I think Option 2 is the best solution for "waiting until we have the
time to solve the problem correctly", as it ensures that transitive
dependencies work for the stable version of Hadoop, and using Hadoop 2
is a very simple documentation issue for how to apply the patch and
rebuild.
Yes, they should add a dependency on Hadoop, if they use it. The
problem isn't just if they use Hadoop classes, though. It is that the
dependency is required for any code path where Accumulo requires
Hadoop... and this is unknown to the user, because the dependency tree
looks like Accumulo h
I tend to agree with Sean, John, and Benson. Option 4 works for now, and
until we can define something that works better (e.g. runtime compatibility
with both hadoop 1 and 2 using reflection and crazy class loaders) we
should not delay the release. Good docs are always helpful where
engineering is
If a user is referencing any of the Hadoop classes, aren't they supposed to
add a dependency on the appropriate Hadoop artifact anyway?

FWIW, option 4 is what Avro does. Their discussion:
https://issues.apache.org/jira/browse/AVRO-1170

On Tue, May 14, 2013 at 4:40 PM, Christopher
…compiled against Hadoop2, neither our Hadoop1
binaries nor our Hadoop2 binaries will be able to transitively resolve
any dependencies defined in profiles. This has significant
implications for user code that depends on Accumulo Maven artifacts.
Every user will essentially have to explicitly add Hadoop d
Hello Accumulators,
I'm interested in having an Accumulo birds of a feather session on June
25th, the day before the Hadoop Summit in San Jose (
http://hadoopsummit.org/san-jose). If you're planning on attending the
conference, consider coming a day early to discuss Accumulo! Also, fe
…uilds for now. ubuntu2, which I removed for the same reason (same
error) a few weeks ago, may be working again so I'm experimentally adding
it back in.

Billie

On Tue, Apr 2, 2013 at 8:02 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See <https://builds.apache.org/job/Accumulo-Trunk-Hadoop-2.0/168/>
>
> --
> Failed to access build log
>
> hudson.util.IOException2: remote file operation failed:
> /home/jenkins/jenkin
I'm going to disable the testParallelWriteSpeed test: we have too much
variability between platforms to make this test reliable.
-Eric
On Tue, Mar 26, 2013 at 10:47 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:
> See <https://builds.apache.org/job/Accumulo-Tru
On Tue, Mar 5, 2013 at 6:37 AM, Jim Klucar wrote:
The Hadoop Summit is coming up in San Jose this summer (
http://hadoopsummit.org/san-jose/ ), and they just released abstracts for a
Community Choice vote. Community voting plays a role in what abstracts are
selected to be presented at the conference. From what I saw, there are two
Accumulo
We intend Accumulo 1.5.0 to be compatible with Hadoop 2.0 and possibly also
0.23.

Billie

On Jan 21, 2013 1:18 PM, wrote:
All,
Has there been progress on Accumulo compatibility with hadoop 2.0 and 0.23?
2.0 support: https://issues.apache.org/jira/browse/ACCUMULO-804
0.23 support: https://issues.apache.org/jira/browse/ACCUMULO-564
Thanks,
Scott
Given the work that Billie just committed to allow the user to set the
version of Hadoop (and ZooKeeper) being compiled against (ACCUMULO-876),
I think the best solution would be to substitute the Hadoop version
specific commands into the scripts at build time.
That is, of course, assuming
Should we add a Hadoop version check to the accumulo script?

On Fri, Dec 14, 2012 at 7:45 AM, Jason Trost wrote:
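One way the proposed check could look (a sketch; the function name and the set of accepted versions are assumptions): parse the first line of `hadoop version` output and warn on anything untested.

```shell
# Sketch of a guard the accumulo launcher could run before starting.
check_hadoop_version() {
  # $1 is the first line of `hadoop version`, e.g. "Hadoop 1.1.1"
  ver="${1#Hadoop }"
  case "$ver" in
    0.20.*|1.*|2.*) echo "ok $ver" ;;
    *) echo "WARN: untested Hadoop version: $ver" ;;
  esac
}
check_hadoop_version "Hadoop 1.1.1"
```

Warning rather than exiting keeps the script usable on versions the project simply hasn't tested yet.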
We saw the same issue recently. We upgraded our dev nodes to hadoop 1.1.1
and it fixed this issue. I'm not sure when the classpath argument was added
to the hadoop command, so a minor upgrade may work too.

--Jason

sent from my DROID
On Dec 14, 2012 7:34 AM, "David Medinets" wrote:
It looks to me like the change of Nov 21, 2012 added the 'hadoop
classpath' call to the accumulo script.
ACCUMULO-708 initial implementation of VFS class loader …
git-svn-id: https://svn.apache.org/repos/asf/accumulo/trunk@1412398
13f79535-47bb-0310-9956-ffa450edef68
Dave Marion a
I didn't think hadoop had a classpath argument, just Accumulo.
Sent from my phone, please pardon the typos and brevity.
On Dec 13, 2012 10:43 PM, "David Medinets" wrote:
> I am at a loss to explain what I am seeing. I have installed Accumulo
> many times without a hitch. Bu
I am at a loss to explain what I am seeing. I have installed Accumulo
many times without a hitch. But today, I am running into a problem
getting the hadoop classpath.
$ /usr/local/hadoop/bin/hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
namenode -format format the
Come to think of it, I've used only 1.0.3 or later on my dev box since
August.
Billie
On Fri, Nov 30, 2012 at 7:09 AM, Josh Elser wrote:
> I believe I'm running Accumulo-1.4.2 with Hadoop-1.0.4 (which is a very
> minor release above 1.0.3 if memory serves) at home.
>
>
I believe I'm running Accumulo-1.4.2 with Hadoop-1.0.4 (which is a very
minor release above 1.0.3 if memory serves) at home.
- Josh
On 11/30/12 9:56 AM, dlmar...@comcast.net wrote:
Has anyone tested this combination?
Dave Marion
Has anyone tested this combination?
Dave Marion
I wish, at this point it looks like no for me.
On Wed, Nov 21, 2012 at 8:48 AM, Billie Rinaldi wrote:
> Is anyone thinking about going to the Hadoop Summit in Amsterdam in March?
> http://hadoopsummit.org/amsterdam
> I'm thinking of proposing a talk on improvements in Accumulo 1.5.
>
> Billie
>
Is anyone thinking about going to the Hadoop Summit in Amsterdam in March?
http://hadoopsummit.org/amsterdam
I'm thinking of proposing a talk on improvements in Accumulo 1.5.
Billie
> …which pulls the hadoop hdfs configuration from the java classpath
>
> Key: ACCUMULO-43
> URL: https://issues.apache.org/jira/
…vant.

> Make Accumulo work with Hadoop 0.23
> -----------------------------------
>
> Key: ACCUMULO-564
> URL: https://issues.apache.org/jira/browse/ACCUMULO-564
> Project: Accumulo
> Issue Type: Task
[
https://issues.apache.org/jira/browse/ACCUMULO-615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Keith Turner resolved ACCUMULO-615.
-----------------------------------

Resolution: Fixed

> accumulo runs fine on Hadoop