Hi Ravi
Yes, so we need to provide a table-level property to configure the blocklet
size while creating a table. Can you please create a JIRA for this?
How about something like this:
CREATE TABLE IF NOT EXISTS table_name (column_name column_type) STORED BY
'carbondata' TBLPROPERTIES('TABLE_BLOCKLETSIZE'='128')
Hi ravipesala:
OK, I will raise a JIRA for this and try to implement it.
--
Sent from:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/
Hi Liang,
Currently TABLE_BLOCKSIZE only limits the size of a carbondata file. It
is not considered when allocating tasks, so the exact value of
TABLE_BLOCKSIZE does not matter much there.
But yes, we can consider setting it to 512M.
We can also change the default blocklet size
(carbon.blockletgroup.size.in.mb) to
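For reference, until a table-level property exists, the blocklet group size can only be changed system-wide via carbon.properties. A minimal sketch (the value shown is purely illustrative, not the proposed new default):

```
# carbon.properties -- illustrative value only
carbon.blockletgroup.size.in.mb=128
```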
Hi,
Yes, it is a good suggestion; we can plan to set the number of loading cores
dynamically based on the available executor cores. Can you please raise a
JIRA for it?
Regards,
Ravindra
On 25 October 2017 at 12:08, xm_zzc <441586...@qq.com> wrote:
Hi:
If we are using carbondata + spark to load data, we can set
carbon.number.of.cores.while.loading to the number of executor cores.
When the number of executor cores is set to 6, there are at least
6 cores per node available for loading data, so we can set
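As a sketch of the current manual workaround described above (until the dynamic setting is implemented), the loading cores can be matched to the executor cores via carbon.properties. The value below is illustrative, assuming 6-core executors:

```
# carbon.properties -- illustrative; match this to spark.executor.cores
carbon.number.of.cores.while.loading=6
```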
Hi All
As you know, the default values of some parameters need adjusting for most
cases. This discussion is for collecting the parameters whose default values
need to be optimized:
1. TABLE_BLOCKSIZE:
current default is 1G; propose adjusting to 512M
2.
Please append here if you propose adjusting other parameters.
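For context on item 1: even if the default changes to 512M, the block size can still be overridden per table via TBLPROPERTIES. A hedged sketch (the table and column names are illustrative):

```sql
CREATE TABLE IF NOT EXISTS sales (id INT, name STRING)
STORED BY 'carbondata'
TBLPROPERTIES('TABLE_BLOCKSIZE'='512')
```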