I guess what I'm asking is: is there a way to set an "infinite" (no maximum) bound on versions (e.g. setMaxVersions(-1), perhaps)? Or do I have to call setMaxVersions(Integer.MAX_VALUE) or setMaxVersions(<some large guess>)? If a large guess is the way to go, what sort of overhead costs should we consider when finding the right balance between room to grow and the maintenance cost of needing to expand later?
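To make the question concrete, here's a toy, self-contained sketch (not HBase code; the class and names are hypothetical) of what a per-cell max-versions bound does: each write adds a timestamped version, and anything beyond the bound is trimmed away, while Integer.MAX_VALUE effectively means "keep everything":

```java
import java.util.Collections;
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical illustration only -- not the HBase implementation.
class VersionedCell {
    private final int maxVersions;
    // timestamp -> value, newest timestamp first
    private final NavigableMap<Long, byte[]> versions =
        new TreeMap<>(Collections.reverseOrder());

    VersionedCell(int maxVersions) {
        this.maxVersions = maxVersions;
    }

    void put(long ts, byte[] value) {
        versions.put(ts, value);
        // Trim the oldest versions once the bound is exceeded.
        while (versions.size() > maxVersions) {
            versions.pollLastEntry();
        }
    }

    int size() {
        return versions.size();
    }
}

public class Demo {
    public static void main(String[] args) {
        // Integer.MAX_VALUE acts as "store them all" in practice.
        VersionedCell unbounded = new VersionedCell(Integer.MAX_VALUE);
        VersionedCell bounded = new VersionedCell(3);
        for (long ts = 1; ts <= 10; ts++) {
            unbounded.put(ts, new byte[0]);
            bounded.put(ts, new byte[0]);
        }
        System.out.println(unbounded.size()); // 10
        System.out.println(bounded.size());   // 3
    }
}
```

The point of the sketch: the bound itself is just an integer check, so a large value costs nothing until versions actually accumulate; the real overhead is the stored versions themselves.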
We plan on building MapReduce jobs to clean up versions based on some conditions, so the value shouldn't get that large, but the conditions for cleaning up those versions might be decided by other consumers of the service. So having room to grow is ideal.

On Tue, Oct 4, 2011 at 11:36 AM, Doug Meil <[email protected]> wrote:
>
> Hi there-
>
> re: "i don't care store them all"
>
> What do you mean?
>
>
> On 10/4/11 12:20 PM, "Micah Whitacre" <[email protected]> wrote:
>
>> In reading the documentation, all I've seen are suggestions on how to set
>> the value and the default value. However, I haven't seen any indication
>> of how to set the value to "i don't care, store them all", or whether
>> there is a maximum bound aside from Integer.MAX_VALUE. Does anyone
>> know?
>>
>> Thanks,
>> Micah
>>
>> [1] - http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html#setMaxVersions(int)
>
>
