Thanks for the answer. I'm a bit surprised about this. Sure, I understand that 
there's some upper limit, but 300 seems like a pretty low value. Support for 
wider tables is definitely required in analytical use cases. Hopefully someone 
starts working on this.

-jan

On 18 Jun 2017, at 23.22, Todd Lipcon <[email protected]> wrote:

Hi Jan,

I don't believe anyone is currently working on expanding the limitation.

If you are willing to live on the edge, it is possible to use 
--unlock-unsafe-flags and bump the limit to a higher number. However, you may 
run into performance or stability issues, since you would be entering a realm 
of usage that has had no testing or real-world use. If you do hit an issue, 
you'll probably be on your own to diagnose and debug the problem.
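
To make that workaround concrete, below is a minimal sketch using the 
kudu-python client that tries to create a table wider than the default 
300-column cap. The host, table, and column names are illustrative, and the 
exact master flag that controls the cap is not named in this thread, so check 
your version's flag list before relying on it; with default settings the 
create_table call is expected to be rejected.

import kudu
from kudu.client import Partitioning

# Connect to the Kudu master (host and port are illustrative).
client = kudu.connect(host='kudu-master.example.com', port=7051)

# Build a schema with one key column plus 350 metric columns,
# i.e. wider than the default 300-column cap.
builder = kudu.schema_builder()
builder.add_column('id').type(kudu.int64).nullable(False).primary_key()
for i in range(350):
    builder.add_column('metric_%03d' % i, type_=kudu.double)
schema = builder.build()

partitioning = Partitioning().add_hash_partitions(column_names=['id'],
                                                  num_buckets=8)

try:
    client.create_table('wide_table_test', schema, partitioning)
except Exception as e:
    # Unless the master's limit has been raised (behind the unsafe-flags
    # override mentioned above), this create is expected to fail.
    print('create_table failed: %s' % e)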

There are a few things in flight (e.g. Hao is working on reducing the number of 
fsyncs when writing blocks) which may help, but until we've expanded our testing 
coverage, we don't want to recommend that users operate past these limits.

-Todd




On Sun, Jun 18, 2017 at 12:35 PM, Jan Holmberg <[email protected]> wrote:
Anyone? I find 300 columns a pretty strict limitation for analytical tables. I'd 
like to know if wider tables are on the roadmap.

-jan

> On 16 Jun 2017, at 21.56, Jan Holmberg <[email protected]> wrote:
>
> Hi,
> I ran into Kudu's limit on the maximum number of columns (300). The same limit 
> seemed to apply to the latest Kudu version as well, but not to e.g. Impala/Hive 
> (at least not to the same extent).
> * Is this limitation going to be loosened in the near future?
> * Any suggestions on how to get around this limitation? Table splitting is the 
> obvious one, but in my case not the desired solution.
>
> cheers,
> -jan




--
Todd Lipcon
Software Engineer, Cloudera
