I have a higher and grander standard of principle than George Washington.
He could not lie; I can, but I won't.
What needs to happen for HADOOP-7823 to get committed for 1.1.0?
- Tim.
From: mfo...@hortonworks.com [mfo...@hortonworks.com] On Behalf Of Matt Foley
[ma...@apache.org]
Sent: Friday, April 20, 2012 6:02 PM
To: common-dev@hadoop.apache.org
Subject: Had
Any update on 1.1.0 schedule?
From: Matt Foley [mfo...@hortonworks.com]
Sent: Monday, March 12, 2012 4:16 PM
To: common-dev@hadoop.apache.org
Subject: [PLAN] Freeze date, Hadoop 1.0.2 RC
Hi all,
It's been a few weeks since 1.0.1 closed code, and I've recei
To: common-dev@hadoop.apache.org
Subject: Re: Compressor tweaks corresponding to HDFS-2834, 3051?
I am a +1 on opening a new JIRA for a first stab at reducing the amount of data
that gets copied around.
--Bobby Evans
On 3/7/12 1:26 AM, "Tim Broberg" wrote:
In https://issues.apach
Components: io
Reporter: Tim Broberg
Per Todd Lipcon's comment in HDFS-2834, "
Whenever a native decompression codec is being used, ... we generally have
the following copies:
1) Socket -> DirectByteBuffer (in SocketChannel implementation)
2) DirectByteBuffer
In https://issues.apache.org/jira/browse/HDFS-2834, Todd says, "
This is also useful whenever a native decompression codec is being used. In
those cases, we generally have the following copies:
1) Socket -> DirectByteBuffer (in SocketChannel implementation)
2) DirectByteBuffer -> byte[] (i
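The copy Todd describes in step 2 can be sketched in plain Java (class and method names here are illustrative, not Hadoop's actual ones): a native-style decompressor that only accepts byte[] forces data out of the direct buffer the socket filled.

```java
import java.nio.ByteBuffer;

public class CopyChainSketch {
    // Step 2 of the chain: a decompressor that only accepts byte[]
    // forces an extra allocation and copy out of the DirectByteBuffer.
    static byte[] copyOut(ByteBuffer direct) {
        byte[] heap = new byte[direct.remaining()];
        direct.get(heap); // the copy HDFS-2834/3051 aim to eliminate
        return heap;
    }

    public static void main(String[] args) {
        // Step 1 (simulated): data has already landed in a direct buffer.
        ByteBuffer direct = ByteBuffer.allocateDirect(4);
        direct.put(new byte[] {1, 2, 3, 4});
        direct.flip();
        System.out.println(copyOut(direct).length);
    }
}
```

A codec that consumed the ByteBuffer directly (via JNI GetDirectBufferAddress, say) would skip `copyOut` entirely, which is the reduction the JIRA is after.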
[
https://issues.apache.org/jira/browse/HADOOP-8003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tim Broberg resolved HADOOP-8003.
-
Resolution: Won't Fix
This is not fixable.
SplitCompressionInputStream is constrained to
Niels,
There are three options here:
1 - Add your codec as an alternative to the default gzip codec.
2 - Modify the gzip codec to incorporate your feature so that it is
pseudo-splittable by default (skippable?)
3 - Do nothing
The code uses the normal splittability interface and doesn't invent
-repo/git-clone instead
of a release tarball which may not work in the case of 1.0?
Its a WIP but I also cover some branch building at
http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk --
Perhaps it could be useful to you.
On Fri, Feb 3, 2012 at 5:18 AM, Tim Broberg wrote:
>
Any suggestions with getting started with eclipse debugging for 1.0.0?
I would like to be able to debug unit tests for my compression codec with some
reasonable debugger, be it eclipse, netbeans, or even jdb.
Harsh recommended Eclipse, so I'm wading in...
When I run "ant eclipse" I get missing
ed test jar and run it via jdb?
On Mon, Jan 30, 2012 at 7:17 AM, Tim Broberg wrote:
> I'd like to be able to step through unit tests with jdb to debug my classes.
>
> Is there a quick-and-easy way to rebuild with ant such that debug information
> is included?
>
> Thanks,
I'd like to be able to step through unit tests with jdb to debug my classes.
Is there a quick-and-easy way to rebuild with ant such that debug information
is included?
Thanks,
- Tim.
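If memory serves (worth verifying against your branch's build.xml), the ant build defaults javac.debug to on, so debug info should already be in the class files; forcing it explicitly would look roughly like the following. The property names and target are from my reading of the 1.x build.xml and may differ by branch, and the classpath in the jdb line is a guess at the build layout:

```
ant -Djavac.debug=on -Djavac.debuglevel=source,lines,vars compile-core-test
jdb -classpath build/classes:build/test/classes org.junit.runner.JUnitCore YourTestClass
```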
The information and any attached documents contained in this message
may be confidential and/or legally priv
Issue Type: New Feature
Components: io
Affects Versions: 1.0.0, 0.23.0, 0.22.0, 0.21.0
Reporter: Tim Broberg
To be splittable, a codec must extend SplittableCompressionCodec which has a
function returning a SplitCompressionInputStream.
SplitCompressionInputStream
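The relationship between the two types can be sketched schematically; this is a toy rendering, not Hadoop's real interfaces, which live in org.apache.hadoop.io.compress and carry more methods.

```java
import java.io.IOException;
import java.io.InputStream;

// Schematic only: names mirror the Hadoop types described above.
abstract class SplitCompressionInputStreamSketch extends InputStream {
    private final long start, end; // byte range this split covers

    SplitCompressionInputStreamSketch(long start, long end) {
        this.start = start;
        this.end = end;
    }

    // A splittable stream reports where its split really begins and ends,
    // since compressed-block boundaries rarely line up with HDFS splits.
    public long getAdjustedStart() { return start; }
    public long getAdjustedEnd() { return end; }
}

interface SplittableCompressionCodecSketch {
    // The codec hands back a stream scoped to [start, end).
    SplitCompressionInputStreamSketch createInputStream(
            InputStream in, long start, long end) throws IOException;
}
```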
What I was missing is that the codec sets the buffer size of the stream to
IO_COMPRESSION_CODEC_SNAPPY_BUFFERSIZE_KEY, so the buffer sizes match closely.
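For reference, that buffer size is governed by a configuration key (the key string below is what IO_COMPRESSION_CODEC_SNAPPY_BUFFERSIZE_KEY resolves to in trunk; the value shown is just the 256KB default spelled out):

```
<!-- core-site.xml: the Snappy codec's buffer size; the stream inherits
     this, keeping the BlockCompressorStream and SnappyCompressor
     buffers in step. -->
<property>
  <name>io.compression.codec.snappy.buffersize</name>
  <value>262144</value>
</property>
```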
- Tim.
From: Tim Broberg
Sent: Thursday, January 26, 2012 12:56 PM
To: common-dev
I'm confused about the disparity of block sizes between BlockCompressorStream
and SnappyCompressor.
BlockCompressorStream has default MAX_INPUT_SIZE on the order of 512 bytes,
whereas SnappyCompressor has IO_COMPRESSION_CODEC_SNAPPY_BUFFERSIZE_DEFAULT of
256kB.
In BlockCompressorStream.write()
Hmm, in what year does the 13th next fall on a Friday?
- Tim.
On Dec 16, 2011, at 6:14 PM, "Konstantin Boudnik" wrote:
> On Fri, Dec 16, 2011 at 12:10PM, Matt Foley wrote:
>> Hello all,
>> I have posted a new release candidate for Hadoop 1.0.0 at
>>http://people.apache.org/~mattf/hadoop
I'm running into some head-scratchers in the area of compression configuration,
and I'm wondering if I can get a little input on why these are the way they are
and perhaps suggestions on how to handle this.
1 - In a patch related to https://issues.apache.org/jira/browse/HADOOP-5879,
strategy and
Type: New Feature
Components: io
Reporter: Tim Broberg
Priority: Minor
I propose to take the suggestion of PIG-42 and extend it to
- add a more robust header such that false matches are vanishingly unlikely
- repeat initial bytes of the header for very fast split
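The "vanishingly unlikely" part is essentially a long magic number: the more header bytes a scanner must match, the smaller the odds that compressed payload data matches by accident. A toy sketch (the magic constant is invented here, not from PIG-42):

```java
// Toy split-header scan: a long magic makes accidental matches inside
// compressed payload data vanishingly unlikely (~2^-64 per position for
// an 8-byte magic). Constants are invented for illustration.
class SplitHeaderSketch {
    static final byte[] MAGIC = {
        (byte) 0xD1, 0x5C, (byte) 0xAB, 0x1E,
        0x7E, 0x57, (byte) 0xBE, (byte) 0xEF
    };

    // Scan forward from 'off' for the next header; return its index or -1.
    static int findHeader(byte[] data, int off) {
        outer:
        for (int i = off; i + MAGIC.length <= data.length; i++) {
            for (int j = 0; j < MAGIC.length; j++) {
                if (data[i + j] != MAGIC[j]) continue outer;
            }
            return i;
        }
        return -1;
    }
}
```

Repeating the initial magic bytes, as the second bullet suggests, lets a scanner reject most non-header positions after comparing only one or two bytes.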
er 04, 2011 10:51 PM
To: common-dev@hadoop.apache.org; Tim Broberg
Subject: Re: Compressor setInput input permanence
Hi Tim,
My guess is that this contract isn't explicitly documented anywhere.
But the good news is that the set of implementors and users of this
API is fairly well contained.
I
The question is, how long can a Compressor count on the user buffer to stick
around after a call to setInput()?
The Compressor object has a method, setInput whose inputs are an array
reference, an offset and a length.
I would expect that this input would no longer be guaranteed to persist aft
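One conservative reading of the contract, sketched with a toy class (not Hadoop's actual Compressor), is for setInput() to copy the user's bytes so the caller may reuse the array the moment the call returns:

```java
import java.util.Arrays;

// Toy illustration of the conservative contract: setInput() copies,
// so the caller's byte[] need not persist past the call.
class CopyingCompressorSketch {
    private byte[] pending = new byte[0];

    public void setInput(byte[] b, int off, int len) {
        pending = Arrays.copyOfRange(b, off, off + len); // defensive copy
    }

    public int pendingBytes() { return pending.length; }
    public byte[] pending() { return pending; }
}
```

The cost, of course, is exactly the kind of extra copy the HDFS-2834 discussion is trying to squeeze out, which is why pinning down the real contract matters.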
It sounds like CHANGES.txt has been helping to resolve issues for Matt, and
taking it away will open the floodgates.
I'm not familiar enough with the Jira procedures yet, but my usual expectation
is that there will be some review between a developer marking the resolution
"resolved" and (don't
3 which we just released. Fair
warning: 0.23 is very much 'alpha' quality currently.
thanks,
Arun
Sent from my iPhone
On Nov 14, 2011, at 9:19 PM, Tim Broberg wrote:
> I need a stable version of hadoop with splittable bzip2 compression which is
> an 0.21 feature.
>
> Is the
I need a stable version of hadoop with splittable bzip2 compression which is an
0.21 feature.
Is there a schedule for stable 0.21 release?
Failing that, what are my chances of getting splittable bzip2 incorporated into
an 0.20.20x.x release?
- Tim.
Matt, this one is biting me on the *** in 0.20.205, but I'm not seeing how to
set the target version, perhaps because it is fixed in 0.22.
https://issues.apache.org/jira/browse/HADOOP-6453
Seems like an easy low-risk fix that's been out there for a few years. What is
the best way to flag this f
. I assume you are doing
that.
Yes, Hadoop snappy JNI goes in the libhadoop OS
Thanks.
Alejandro
On Mon, Oct 31, 2011 at 2:56 PM, Tim Broberg wrote:
> Solved - In trunk, the snappy symbols are getting linked in with the rest
> of the native stuff in libhadoop.so:
>
> [tbroberg@san-
Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
So, this command line is sufficient:
mvn install -Pdist,native -DskipTests
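To confirm the snappy symbols really landed in libhadoop.so, something like the following works; the path is an assumption about the trunk build layout and will vary by branch:

```
mvn install -Pdist,native -DskipTests
nm -D hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so | grep -i snappy
```

If the grep shows the Java_org_apache_hadoop_io_compress_snappy_* JNI symbols, the native codec was linked in.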
Thanks again for answering, Alejandro.
- Tim.
From: Tim Broberg [tim.brob...@exar.com]
Sent: Monday, October 31, 2011 12:59 PM
To: common
oop.apache.org; Tim Broberg
Subject: Re: Example mvn cmd line to build snappy native lib?
Tim,
You have to download the snappy source tarball, run './configure' and
then 'make install'.
Thanks.
Alejandro
On Mon, Oct 31, 2011 at 11:24 AM, Tim Broberg wrote:
> bump
>
bump
Does anybody know how to build the snappy native library?
- Tim.
From: Tim Broberg
To: "common-dev@hadoop.apache.org"
Sent: Friday, October 28, 2011 11:52 PM
Subject: Example mvn cmd line to build snappy native lib?
I'm trying to
I'm trying to build the trunk from hadoop SVN including all the native
libraries.
The BUILDING.txt file has the following documentation on building the native
libraries:
"
Build options:
* Use -Pnative to compile/bundle native code
* Use -Dsnappy.prefix=(/usr/local) & -Dbundle.snappy=(fal
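Putting those options together, a full command line would look roughly like this (values are illustrative; check BUILDING.txt on your branch for the exact flags):

```
mvn package -Pdist,native -DskipTests -Dsnappy.prefix=/usr/local -Dbundle.snappy=true
```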
I'd like to add a core module to hadoop, but I'm running into some issues
getting started.
What I want is to be able to add a native library and codec to some stable
build of hadoop, build, debug, experiment, and benchmark.
Currently, I'm trying to rebuild the Cloudera rpms so I can get a com
After still more puttering, I gave up and just logged this in jira.
https://issues.apache.org/jira/browse/HADOOP-7726
- Tim.
From: Prasanth J
To: common-dev@hadoop.apache.org; Tim Broberg
Sent: Thursday, October 6, 2011 11:02 AM
Subject: Re: Hadoop on
-plugin
Affects Versions: 0.24.0
Environment: Fedora 15
Reporter: Tim Broberg
I'm new to hadoop, java, and eclipse, so please forgive me if I jumble multiple
issues together or mistake the symptoms of one problem for a separate issue.
Attempting to follow the
They don't make it obvious how to do that, do they?
Send to common-dev-subscr...@apache.org, then reply to the email that it sends
you.
Hope this helps,
- Tim.
http://apache.org/foundation/mailinglists.html
From: #NGUYEN HA DUY# [y080...@e.ntu.edu.s
I am trying to build hadoop so I can understand it and perhaps make some
moderate contributions.
Following the instructions here, http://wiki.apache.org/hadoop/HowToContribute,
I am running the following commands:
svn checkout http://svn.apache.org/repos/asf/hadoop/common/trunk/ hadoop-trunk