GitHub user anoopsharma00 opened a pull request:
https://github.com/apache/incubator-trafodion/pull/1009
Various fixes, details below
-- max length limited to 16777216 bytes (16 MB)
for char columns and functions (REPEAT, CONCAT).
(optimizer/SynthType.cpp, common/ComSmallDefs.h, sqlcomp/nadefaults.cpp)
-- the previous max-length change requires a new HBase property,
hbase.client.keyvalue.maxsize, so that large key/values can be
held in an HBase cell.
The following scripts have been updated to handle that.
(sqf/sql/scripts/install_local_hadoop,
install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/
service_advisor.py
install/installer/traf_hortonworks_mods
install/python-installer/configs/mod_cfgs.json)
Developers can also add this property to hbase-site.xml if
they don't want to reinstall local Hadoop.
<property>
<name>hbase.client.keyvalue.maxsize</name>
<value>0</value>
</property>
-- while accessing a Hive table as an external table, the Hive table
and the corresponding external table definitions are validated to be
the same. This validates that corresponding columns have the same
data attributes (type, length, scale, etc.).
This check caused failures when the Hive column is of 'string'
datatype, because the length of a Hive 'string' column can be changed
by a CQD while the corresponding external table has a predefined
length set when the table is created.
The validation check now ignores the length attribute if the Hive
column is of 'string' datatype.
(optimizer/BindRelExpr.cpp, common/CharType.*, NAType.*)
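The relaxed check can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual C++ code in optimizer/BindRelExpr.cpp:

```python
# Sketch of the relaxed column-attribute comparison (hypothetical
# names and dict layout; the real logic is in optimizer/BindRelExpr.cpp).

def columns_match(hive_col, ext_col):
    """Compare a Hive column against its external-table counterpart."""
    if hive_col["type"] != ext_col["type"]:
        return False
    if hive_col["scale"] != ext_col["scale"]:
        return False
    # The length of a Hive 'string' column is governed by a CQD and may
    # differ from the length frozen into the external table definition,
    # so length is not compared for 'string' columns.
    if hive_col["type"] != "string" and hive_col["length"] != ext_col["length"]:
        return False
    return True
```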
-- data moved into a direct buffer would sometimes overflow and crash.
The max direct buffer length used to send/retrieve HBase data
is now limited to 1 GB. (executor/ExHbaseAccess.cpp)
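The cap amounts to clamping the requested allocation. A minimal sketch with a hypothetical constant name (the actual limit is applied in executor/ExHbaseAccess.cpp):

```python
# Sketch of capping the direct-buffer size at 1 GB (hypothetical
# names; illustrates the limit described above, not the real C++).

MAX_DIRECT_BUFFER_BYTES = 1 * 1024 * 1024 * 1024  # 1 GB cap

def direct_buffer_size(requested_bytes):
    """Return the buffer size to allocate, never exceeding the cap."""
    return min(requested_bytes, MAX_DIRECT_BUFFER_BYTES)
```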
-- errors during VSBB upsert are now handled correctly
(executor/ExHbaseIUD.cpp)
-- support for the GET CATALOGS command
(generator/GenRelExeUtil.cpp, executor/ExExeUtilGet.cpp)
-- an incorrect computation would sometimes cause GROUP BY ROLLUP to
crash the compiler in NAHeap::unlinkLargeFragment()
(generator/GenRelGrby.cpp)
-- sorting a row larger than the preset sort buffer size would crash.
That has been fixed by allocating space for at least one row.
(generator/GenRelMisc.cpp)
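The fix boils down to sizing the buffer against the largest row. A minimal sketch with hypothetical names (the actual change is in generator/GenRelMisc.cpp):

```python
# Sketch of the sort-buffer sizing fix (hypothetical names): the
# buffer must be large enough to hold at least one row, even when a
# single row exceeds the preset buffer size.

def sort_buffer_bytes(preset_bytes, max_row_bytes):
    """Return a buffer size guaranteed to fit at least one row."""
    return max(preset_bytes, max_row_bytes)
```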
-- purgedata on a delimited name was failing. That has been fixed.
(optimizer/RelExeUtil.h)
-- regress/tools/runregr_privs1/privs2 fixed to handle running a
subset of tests
-- regress/seabase/TEST031 updated with new tests
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/anoopsharma00/incubator-trafodion
ansharma_trafr21_br
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/incubator-trafodion/pull/1009.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #1009
----
commit 8aa532330e4dbf316a09817a033292c85cb02c4e
Author: Anoop Sharma <[email protected]>
Date: 2017-03-14T20:16:08Z
Various fixes, details below
----