RE: Hadoop-1.0.3 and 1.1 release / code freeze proposals

2012-04-21 Thread Tim Broberg
What needs to happen for HADOOP-7823 to get committed for 1.1.0? - Tim. From: mfo...@hortonworks.com [mfo...@hortonworks.com] On Behalf Of Matt Foley [ma...@apache.org] Sent: Friday, April 20, 2012 6:02 PM To: common-dev@hadoop.apache.org Subject: Had

RE: [PLAN] Freeze date, Hadoop 1.0.2 RC

2012-03-12 Thread Tim Broberg
Any update on 1.1.0 schedule? From: Matt Foley [mfo...@hortonworks.com] Sent: Monday, March 12, 2012 4:16 PM To: common-dev@hadoop.apache.org Subject: [PLAN] Freeze date, Hadoop 1.0.2 RC Hi all, It's been a few weeks since 1.0.1 closed code, and I've recei

RE: Compressor tweaks corresponding to HDFS-2834, 3051?

2012-03-07 Thread Tim Broberg
To: common-dev@hadoop.apache.org Subject: Re: Compressor tweaks corresponding to HDFS-2834, 3051? I am a +1 on opening a new JIRA for a first stab at reducing the amount of data that gets copied around. --Bobby Evans On 3/7/12 1:26 AM, "Tim Broberg" wrote: In https://issues.apach

[jira] [Created] (HADOOP-8148) Zero-copy ByteBuffer-based compressor / decompressor API

2012-03-07 Thread Tim Broberg (Created) (JIRA)
Components: io Reporter: Tim Broberg Per Todd Lipcon's comment in HDFS-2834, " Whenever a native decompression codec is being used, ... we generally have the following copies: 1) Socket -> DirectByteBuffer (in SocketChannel implementation) 2) DirectByteBuffer
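
A minimal sketch of the kind of ByteBuffer-based decompressor interface this issue proposes; the interface name and method shapes below are illustrative assumptions, not the API that was ultimately committed:

    import java.io.IOException;
    import java.nio.ByteBuffer;

    /*
     * Hypothetical zero-copy decompressor: compressed input and decompressed
     * output are ByteBuffers (ideally direct), so a native codec can operate
     * on them in place instead of copying through byte[] arrays.
     */
    public interface ByteBufferDecompressor {
      /* Supply a buffer of compressed bytes; the caller must not touch it
         until decompress() has consumed it. */
      void setInput(ByteBuffer compressed) throws IOException;

      /* Decompress into dst, advancing both buffers' positions; returns
         the number of bytes written into dst. */
      int decompress(ByteBuffer dst) throws IOException;

      /* True once the current input is fully consumed. */
      boolean needsInput();
    }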

Compressor tweaks corresponding to HDFS-2834, 3051?

2012-03-06 Thread Tim Broberg
In https://issues.apache.org/jira/browse/HDFS-2834, Todd says, " This is also useful whenever a native decompression codec is being used. In those cases, we generally have the following copies: 1) Socket -> DirectByteBuffer (in SocketChannel implementation) 2) DirectByteBuffer -> byte[] (i

[jira] [Resolved] (HADOOP-8003) Make SplitCompressionInputStream an interface instead of an abstract class

2012-02-23 Thread Tim Broberg (Resolved) (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-8003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tim Broberg resolved HADOOP-8003. - Resolution: Won't Fix This is not fixable. SplitCompressionInputStream is constrained to

RE: Making Gzip splittable

2012-02-22 Thread Tim Broberg
Niels, There are three options here: 1 - Add your codec, an alternative to the default gzip codec. 2 - Modify the gzip codec to incorporate your feature so that it is pseudo-splittable by default (skippable?) 3 - Do nothing The code uses the normal splittability interface and doesn't invent

RE: Getting started with Eclipse for Hadoop 1.0.0?

2012-02-05 Thread Tim Broberg
-repo/git-clone instead of a release tarball which may not work in the case of 1.0? It's a WIP but I also cover some branch building at http://wiki.apache.org/hadoop/QwertyManiac/BuildingHadoopTrunk -- Perhaps it could be useful to you. On Fri, Feb 3, 2012 at 5:18 AM, Tim Broberg wrote: >

Getting started with Eclipse for Hadoop 1.0.0?

2012-02-02 Thread Tim Broberg
Any suggestions with getting started with eclipse debugging for 1.0.0? I would like to be able to debug unit tests for my compression codec with some reasonable debugger, be it eclipse, netbeans, or even jdb. Harsh recommended Eclipse, so I'm wading in... When I run "ant eclipse" I get missing

RE: Debugging 1.0.0 with jdb

2012-02-01 Thread Tim Broberg
ed test jar and run it via jdb? On Mon, Jan 30, 2012 at 7:17 AM, Tim Broberg wrote: > I'd like to be able to step through unit tests with jdb to debug my classes. > > Is there a quick-and-easy way to rebuild with ant such that debug information > is included? > > Thanks,

Debugging 1.0.0 with jdb

2012-01-29 Thread Tim Broberg
I'd like to be able to step through unit tests with jdb to debug my classes. Is there a quick-and-easy way to rebuild with ant such that debug information is included? Thanks, - Tim.

[jira] [Created] (HADOOP-8003) Make SplitCompressionInputStream an interface instead of an abstract class

2012-01-28 Thread Tim Broberg (Created) (JIRA)
Issue Type: New Feature Components: io Affects Versions: 1.0.0, 0.23.0, 0.22.0, 0.21.0 Reporter: Tim Broberg To be splittable, a codec must extend SplittableCompressionCodec which has a function returning a SplitCompressionInputStream. SplitCompressionInputStream
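
For context, the splittability contract under discussion looks roughly like this in the 0.21/1.x line (shape reproduced from memory, so treat the details as approximate):

    package org.apache.hadoop.io.compress;

    import java.io.IOException;
    import java.io.InputStream;

    public interface SplittableCompressionCodec extends CompressionCodec {
      /* CONTINUOUS decompresses straight across block boundaries; BYBLOCK
         aligns reads to compression-block boundaries so splits can start
         at them. */
      enum READ_MODE { CONTINUOUS, BYBLOCK }

      SplitCompressionInputStream createInputStream(InputStream seekableIn,
          Decompressor decompressor, long start, long end, READ_MODE readMode)
          throws IOException;
    }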

RE: Snappy compression block sizes

2012-01-28 Thread Tim Broberg
What I was missing is that the codec sets the buffer size of the stream to IO_COMPRESSION_CODEC_SNAPPY_BUFFERSIZE_KEY, so the buffer sizes match closely. - Tim. From: Tim Broberg Sent: Thursday, January 26, 2012 12:56 PM To: common-dev

Snappy compression block sizes

2012-01-26 Thread Tim Broberg
I'm confused about the disparity of block sizes between BlockCompressorStream and SnappyCompressor. BlockCompressorStream has default MAX_INPUT_SIZE on the order of 512 bytes, whereas SnappyCompressor has IO_COMPRESSION_CODEC_SNAPPY_BUFFERSIZE_DEFAULT of 256kB. In BlockCompressorStream.write()
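
A sketch of the reconciliation described in the follow-up message above: both sides derive their size from the same configuration key. The key is the one the Snappy codec uses in this period, but treat the exact name and default as assumptions:

    import org.apache.hadoop.conf.Configuration;

    public class SnappyBufferSizes {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The codec sizes both the compressor's internal buffer and the
        // BlockCompressorStream block size from this one key, so the two
        // stay in agreement instead of defaulting to ~512 B vs 256 kB.
        int bufferSize = conf.getInt(
            "io.compression.codec.snappy.buffersize", 256 * 1024);
        System.out.println("snappy buffer size = " + bufferSize);
      }
    }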

Re: [VOTE] Hadoop-1.0.0 release candidate 3

2011-12-16 Thread Tim Broberg
Hmm, on what year does the 13th next fall on a Friday? - Tim. On Dec 16, 2011, at 6:14 PM, "Konstantin Boudnik" wrote: > On Fri, Dec 16, 2011 at 12:10 PM, Matt Foley wrote: >> Hello all, >> I have posted a new release candidate for Hadoop 1.0.0 at >> http://people.apache.org/~mattf/hadoop

Compression configuration peculiarities

2011-12-13 Thread Tim Broberg
I'm running into some head-scratchers in the area of compression configuration, and I'm wondering if I can get a little input on why these are the way they are and perhaps suggestions on how to handle this. 1 - In a patch related to https://issues.apache.org/jira/browse/HADOOP-5879, strategy and
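
As one concrete example of the configuration plumbing in question, the ZlibFactory helpers that came out of HADOOP-5879 carry level and strategy in the Configuration rather than on the codec object itself. A sketch, assuming the 1.x-era ZlibFactory API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel;
    import org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy;
    import org.apache.hadoop.io.compress.zlib.ZlibFactory;

    public class ZlibConfiguration {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Level and strategy live in the Configuration; any codec later
        // created against this conf picks them up, which is part of the
        // peculiarity being discussed.
        ZlibFactory.setCompressionLevel(conf, CompressionLevel.BEST_SPEED);
        ZlibFactory.setCompressionStrategy(conf,
            CompressionStrategy.DEFAULT_STRATEGY);
      }
    }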

[jira] [Created] (HADOOP-7909) Implement Splittable Gzip based on a signature in a gzip header field

2011-12-10 Thread Tim Broberg (Created) (JIRA)
Type: New Feature Components: io Reporter: Tim Broberg Priority: Minor I propose to take the suggestion of PIG-42 and extend it to - add a more robust header such that false matches are vanishingly unlikely - repeat initial bytes of the header for very fast split
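
The proposed mechanism rests on the FEXTRA field of RFC 1952. A sketch of a gzip member header carrying a signature subfield follows; the subfield IDs and signature bytes here are made-up placeholders, not values fixed by the JIRA:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    public class SplitGzipHeader {
      // Placeholder subfield ID and signature bytes -- illustrative only.
      private static final byte SI1 = 'S', SI2 = 'P';
      private static final byte[] SIGNATURE = {'S', 'P', 'L', 'T', '1'};

      /* Build a gzip member header whose RFC 1952 FEXTRA subfield holds the
         signature, so a scanner can hunt for it when choosing split points. */
      public static byte[] header() throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(new byte[] {0x1f, (byte) 0x8b, 8, 0x04}); // magic, deflate, FLG=FEXTRA
        out.write(new byte[] {0, 0, 0, 0});                 // MTIME unset
        out.write(0);                                       // XFL
        out.write(0xff);                                    // OS = unknown
        int xlen = 4 + SIGNATURE.length;                    // subfield header + data
        out.write(xlen & 0xff);
        out.write((xlen >>> 8) & 0xff);                     // XLEN, little-endian
        out.write(SI1); out.write(SI2);
        out.write(SIGNATURE.length & 0xff);
        out.write((SIGNATURE.length >>> 8) & 0xff);         // subfield LEN, LE
        out.write(SIGNATURE);
        return out.toByteArray();
      }
    }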

RE: Compressor setInput input permanence

2011-12-05 Thread Tim Broberg
er 04, 2011 10:51 PM To: common-dev@hadoop.apache.org; Tim Broberg Subject: Re: Compressor setInput input permanence Hi Tim, My guess is that this contract isn't explicitly documented anywhere. But the good news is that the set of implementors and users of this API is fairly well contained. I

Compressor setInput input permanence

2011-12-03 Thread Tim Broberg
The question is, how long can a Compressor count on the user buffer to stick around after a call to setInput()? The Compressor object has a method, setInput, whose inputs are an array reference, an offset and a length. I would expect that this input would no longer be guaranteed to persist aft
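
The conservative answer, if the contract turns out to be undocumented, is for the implementation to copy the bytes inside setInput() so it never relies on the caller's array surviving the call. A sketch (class and field names invented for illustration):

    import java.util.Arrays;

    public class CopyingInputBuffer {
      private byte[] userBuf;   // our private copy of the caller's bytes
      private int userBufLen;

      /* Same shape as Compressor.setInput(byte[], int, int), but copies
         defensively, so the caller may reuse its buffer immediately. */
      public synchronized void setInput(byte[] b, int off, int len) {
        if (b == null || off < 0 || len < 0 || off > b.length - len) {
          throw new IllegalArgumentException("bad setInput range");
        }
        userBuf = Arrays.copyOfRange(b, off, off + len);
        userBufLen = len;
      }
    }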

RE: Dropping CHANGES.txt?

2011-11-21 Thread Tim Broberg
It sounds like CHANGES.txt has been helping to resolve issues for Matt, and taking it away will open the floodgates. I'm not familiar enough with the Jira procedures yet, but my usual expectation is that there will be some review between a developer marking the resolution "resolved" and (don't

RE: 0.21 stable schedule?

2011-11-14 Thread Tim Broberg
3 which we just released. Fair warning: 0.23 is very much 'alpha' quality currently. thanks, Arun Sent from my iPhone On Nov 14, 2011, at 9:19 PM, Tim Broberg wrote: > I need a stable version of hadoop with splittable bzip2 compression which is > an 0.21 feature. > > Is the

0.21 stable schedule?

2011-11-14 Thread Tim Broberg
I need a stable version of hadoop with splittable bzip2 compression which is an 0.21 feature. Is there a schedule for stable 0.21 release? Failing that, what are my chances of getting splittable bzip2 incorporated into an 0.20.20x.x release? - Tim.
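
For reference, reading a bzip2 file through the splittable interface looks roughly like this against the 0.21-era API (a sketch; error handling and record parsing omitted):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.BZip2Codec;
    import org.apache.hadoop.io.compress.SplitCompressionInputStream;
    import org.apache.hadoop.io.compress.SplittableCompressionCodec;

    public class SplitBzip2Read {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]);              // some .bz2 file
        FileSystem fs = path.getFileSystem(conf);
        FSDataInputStream raw = fs.open(path);      // must be Seekable
        BZip2Codec codec = new BZip2Codec();
        long start = 0;
        long end = fs.getFileStatus(path).getLen(); // whole file as one "split"
        SplitCompressionInputStream in = codec.createInputStream(
            raw, codec.createDecompressor(), start, end,
            SplittableCompressionCodec.READ_MODE.BYBLOCK);
        // BYBLOCK aligns reading to bzip2 block boundaries, which is what
        // lets a record reader start mid-file; read from 'in' as usual.
        in.close();
      }
    }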

RE: [ANNOUNCE] Intend to build a 0.20.205.1 candidate next Friday 11 Nov.

2011-11-04 Thread Tim Broberg
Matt, this one is biting me on the *** in 0.20.205, but I'm not seeing how to set the target version, perhaps because it is fixed in 0.22. https://issues.apache.org/jira/browse/HADOOP-6453 Seems like an easy low-risk fix that's been out there for a few years. What is the best way to flag this f

RE: Example mvn cmd line to build snappy native lib?

2011-10-31 Thread Tim Broberg
. I assume you are doing that. Yes, Hadoop snappy JNI goes in the libhadoop OS Thanks. Alejandro On Mon, Oct 31, 2011 at 2:56 PM, Tim Broberg wrote: > Solved - In trunk, the snappy symbols are getting linked in with the rest > of the native stuff in libhadoop.so: > > [tbroberg@san-

RE: Example mvn cmd line to build snappy native lib?

2011-10-31 Thread Tim Broberg
Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs So, this command line is sufficient: mvn install -Pdist,native -DskipTests Thanks again for answering, Alejandro. - Tim. From: Tim Broberg [tim.brob...@exar.com] Sent: Monday, October 31, 2011 12:59 PM To: common
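
The symbol quoted above is just the standard JNI name mangling of a native method. A stripped-down illustration of the Java side that demands it (the real SnappyDecompressor has much more to it):

    package org.apache.hadoop.io.compress.snappy;

    public class SnappyDecompressor {
      // Mangles to
      // Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs,
      // which is why that symbol must end up inside libhadoop.so.
      private static native void initIDs();

      static {
        System.loadLibrary("hadoop"); // the snappy glue is linked into libhadoop
        initIDs();
      }
    }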

RE: Example mvn cmd line to build snappy native lib?

2011-10-31 Thread Tim Broberg
oop.apache.org; Tim Broberg Subject: Re: Example mvn cmd line to build snappy native lib? Tim, You have to download snappy from the source tarball, run './configure' and then 'make install' Thanks. Alejandro On Mon, Oct 31, 2011 at 11:24 AM, Tim Broberg wrote: > bump >

Re: Example mvn cmd line to build snappy native lib?

2011-10-31 Thread Tim Broberg
bump Does anybody know how to build the snappy native library? - Tim. From: Tim Broberg To: "common-dev@hadoop.apache.org" Sent: Friday, October 28, 2011 11:52 PM Subject: Example mvn cmd line to build snappy native lib? I'm trying to

Example mvn cmd line to build snappy native lib?

2011-10-28 Thread Tim Broberg
I'm trying to build the trunk from hadoop SVN including all the native libraries. The BUILDING.txt file has the following documentation on building the native libraries: " Build options: * Use -Pnative to compile/bundle native code * Use -Dsnappy.prefix=(/usr/local) & -Dbundle.snappy=(fal

Development basis / rebuilding Cloudera dist

2011-10-21 Thread Tim Broberg
I'd like to add a core module to hadoop, but I'm running into some issues getting started. What I want is to be able to add a native library and codec to some stable build of hadoop, build, debug, experiment, and benchmark. Currently, I'm trying to rebuild the Cloudera rpms so I can get a com

Re: Hadoop on Eclipse

2011-10-06 Thread Tim Broberg
After still more puttering, I gave up and just logged this in jira. https://issues.apache.org/jira/browse/HADOOP-7726 - Tim. From: Prasanth J To: common-dev@hadoop.apache.org; Tim Broberg Sent: Thursday, October 6, 2011 11:02 AM Subject: Re: Hadoop on

[jira] [Created] (HADOOP-7726) eclipse plugin does not build with 0.20.205

2011-10-06 Thread Tim Broberg (Created) (JIRA)
-plugin Affects Versions: 0.24.0 Environment: Fedora 15 Reporter: Tim Broberg I'm new to hadoop, java, and eclipse, so please forgive me if I jumble multiple issues together or mistake the symptoms of one problem for a separate issue. Attempting to follow the

RE: subscribe to the Hadoop developer

2011-10-02 Thread Tim Broberg
They don't make it obvious how to do that, do they? Send to common-dev-subscr...@apache.org, then reply to the email that it sends you. Hope this helps, - Tim. http://apache.org/foundation/mailinglists.html From: #NGUYEN HA DUY# [y080...@e.ntu.edu.s

Artifact missing - org.apache.hadoop:hadoop-project:pom:0.24.0-SNAPSHOT

2011-10-02 Thread Tim Broberg
I am trying to build hadoop so I can understand it and perhaps make some moderate contributions. Following the instructions here, http://wiki.apache.org/hadoop/HowToContribute, I am running the following commands: svn checkout http://svn.apache.org/repos/asf/hadoop/common/trunk/ hadoop-trunk
