GitHub user shahidki31 reopened a pull request:

    https://github.com/apache/carbondata/pull/1725

    [CARBONDATA-1941] Documentation added for Lock Retry

    Added the properties, default values, and descriptions for the lock retry.
    


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/shahidki31/carbondata master

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/1725.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1725
    
----
commit 38038add7065a05957f84d884298c901253f3d4d
Author: mohammadshahidkhan <mohdshahidkhan1987@...>
Date:   2017-12-28T12:42:37Z

    [CARBONDATA-1955] Delta DataType calculation is incorrect for long type
    
    Problem:
    In the case of the Long type, the delta data type was always chosen as Long.
    But the data type should be chosen based on the diff (max - min) of the max and min values.
    Solution:
    Corrected the logic to choose the delta data type based on the diff of the max and min values.
    
    This closes #1744
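
A minimal sketch of the idea described above (hypothetical Java, not CarbonData's actual code): choose the smallest integral type that can hold the diff of the max and min values, rather than defaulting to Long.

```java
// Hypothetical sketch: pick the delta data type from the diff (max - min)
// instead of the column's declared Long type.
public class DeltaTypeChooser {
    enum DataType { BYTE, SHORT, INT, LONG }

    static DataType chooseDeltaType(long max, long min) {
        long diff = max - min; // assumes no overflow, for illustration only
        if (diff <= Byte.MAX_VALUE) return DataType.BYTE;
        if (diff <= Short.MAX_VALUE) return DataType.SHORT;
        if (diff <= Integer.MAX_VALUE) return DataType.INT;
        return DataType.LONG;
    }
}
```

With max = 1000 and min = 900 the diff is 100, so one byte per value suffices even though the column type is Long.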

commit 1f54c47282bc201f2071bc8c9cc1be19baf0c9a1
Author: manishgupta88 <tomanishgupta18@...>
Date:   2017-12-27T17:39:58Z

    [CARBONDATA-1946] Exception thrown after alter data type change operation on dictionary exclude integer type column
    
    Problem: After a restructure change data type operation (INT to BIGINT) on a dictionary exclude INT type column, an exception is thrown when a select query is triggered.
    
    Analysis: While retrieving the data, the vector is created for the BIGINT type (size 8 bytes), but the actual length of each value is 4 bytes, so the length check performed while reading the data fails.
    
    Solution: Added a new restructuredType variable in the vector and assigned the block dimension's data type to it.
    
    This closes #1732
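
The width mismatch can be illustrated with a small sketch (hypothetical code; the actual fix adds a restructuredType variable to the vector): decode the stored values at their original 4-byte width, then widen them into the 8-byte vector.

```java
// Hypothetical sketch: after an INT -> BIGINT alter, stored values are still
// 4 bytes wide, so the reader must decode at the original width and then
// widen each value into the 8-byte BIGINT vector.
public class RestructureRead {
    static long[] readIntAsLong(byte[] page, int rowCount) {
        java.nio.ByteBuffer buf = java.nio.ByteBuffer.wrap(page);
        long[] vector = new long[rowCount];
        for (int i = 0; i < rowCount; i++) {
            vector[i] = buf.getInt(); // read 4 bytes, widen to long
        }
        return vector;
    }
}
```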

commit e40b34b08210981cbe3bffe73cacff2577fbd8cb
Author: mohammadshahidkhan <mohdshahidkhan1987@...>
Date:   2017-06-28T13:52:45Z

    [CARBONDATA-1249] Wrong order of columns in redirected csv for bad records
    
    Problem:
    Wrong order of columns in redirected csv for bad records
    The RowParser rearranges the raw CSV data based on the inputMapping and outputMapping.
    So the converter step does not have the actual raw CSV record to log or redirect the bad record details.
    
    Steps to reproduce:
    
    CREATE TABLE employee (Name string, age int, project string) STORED BY 'carbondata'
    
    LOAD DATA LOCAL INPATH '' INTO TABLE employee OPTIONS('BAD_RECORDS_ACTION'='REDIRECT')
    Data:
    
    Name,age,Project
    Sam,27,Carbon
    Ruhi,23x,Hadoop
    
    The second record is a bad record, so it will be written to the CSV file at the bad record location.
    
    Expected:
    
    Ruhi,23x,Hadoop
    
    This closes #1116
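
A sketch of the fix's intent (hypothetical code, not the actual RowParser): keep the raw row available so the redirect writes it in the original column order instead of the rearranged internal order.

```java
// Hypothetical sketch: the converter rearranges the row for internal
// processing, but the bad-record redirect should emit the untouched raw row.
public class BadRecordRedirect {
    // Rearranges a raw row into internal order via an output mapping.
    static String[] rearrange(String[] raw, int[] outputMapping) {
        String[] out = new String[raw.length];
        for (int i = 0; i < raw.length; i++) {
            out[i] = raw[outputMapping[i]];
        }
        return out;
    }

    // The redirect uses the raw row, preserving the CSV's column order.
    static String redirectLine(String[] raw) {
        return String.join(",", raw);
    }
}
```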

commit 6f10c4127914c94fdd6f7af16512e0758f2bb146
Author: ravipesala <ravi.pesala@...>
Date:   2017-12-27T09:15:36Z

    [CARBONDATA-1936][PARTITION] Corrected bad record handling and avoided double conversion of data in the partition table
    
    Currently, a one-time data conversion happens while creating the RDD during data load, to make sure the partitions are added in the right format. But this approach creates an issue for bad record handling, as writing bad records is not possible from the RDD.
    In this PR we do not convert the data in the RDD; instead, the data is converted while adding the partition information to Hive.
    
    This closes #1729

commit 7a4bd2298557ff76ee29d6c8efa37c604b548a07
Author: Manohar <manohar.crazy09@...>
Date:   2018-01-04T09:09:25Z

    [CARBONDATA-1912] Handling lock issues for alter rename operation
    
    Modified the code to release all locks acquired on the old table in case of any exception, i.e., the old table locks are released only when an exception occurs, instead of being unlocked in the finally block.
    
    This closes #1734
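
The described change can be sketched as follows (hypothetical code; TableLock stands in for the real lock objects): unlock the old table's locks only in the catch path, not in a finally block.

```java
// Hypothetical sketch: release all locks acquired on the old table only when
// the rename fails, instead of always unlocking in a finally block.
public class RenameLockHandling {
    static class TableLock {
        boolean held = true;
        void unlock() { held = false; }
    }

    static void renameTable(java.util.List<TableLock> oldTableLocks, Runnable rename) {
        try {
            rename.run();
        } catch (RuntimeException e) {
            for (TableLock lock : oldTableLocks) {
                lock.unlock(); // roll back: release every old-table lock
            }
            throw e;
        }
    }
}
```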

commit bcf3ca3feda544dcbc1b5c096b98369c8a27f4e3
Author: mohammadshahidkhan <mohdshahidkhan1987@...>
Date:   2017-12-22T11:22:44Z

    [CARBONDATA-1929][Validation] Carbon property configuration validation + fixed test case
    
    Added validation for the parameters below:
    carbon.timestamp.format
    carbon.date.format
    carbon.sort.file.write.buffer.size (minValue = 10 KB, maxValue = 10 MB, defaultValue = 16 KB)
    carbon.sort.intermediate.files.limit (minValue = 2, maxValue = 50, defaultValue = 20)
    
    This closes #1718
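
Range validation of this kind can be sketched as follows (hypothetical code; the names and exact fallback behaviour are assumptions, not CarbonData's implementation): out-of-range or unparsable values fall back to the default.

```java
// Hypothetical sketch of range validation for a property like
// carbon.sort.file.write.buffer.size (min 10 KB, max 10 MB, default 16 KB).
public class PropertyValidator {
    static final int MIN_KB = 10;          // 10 KB
    static final int MAX_KB = 10 * 1024;   // 10 MB
    static final int DEFAULT_KB = 16;      // 16 KB

    // Returns the configured value in KB, or the default if invalid.
    static int validateBufferSizeKb(String configured) {
        try {
            int kb = Integer.parseInt(configured);
            return (kb < MIN_KB || kb > MAX_KB) ? DEFAULT_KB : kb;
        } catch (NumberFormatException e) {
            return DEFAULT_KB; // unparsable values fall back to the default
        }
    }
}
```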

commit 837fdd2cb91780c9193efb5b37cd107c9fa36591
Author: mohammadshahidkhan <mohdshahidkhan1987@...>
Date:   2017-06-23T06:56:25Z

    [CARBONDATA-1218] In case of data-load failure, the BadRecordsLogger.badRecordEntry map holding the task status does not remove the task entry
    
    Problem
    For the GLOBAL_SORT scope option, in case of data-load failure the BadRecordsLogger.badRecordEntry map holding the task status does not remove the task entry.
    Because of this, the next load fails even though the data being loaded has no bad records.
    
    Solution
    The map entry must be removed after load completion, whether the load succeeds or fails.
    Refactored the bad record logger.
    
    This closes #1082
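
The fix amounts to cleaning up the shared map on both the success and failure paths, sketched below (hypothetical code; only the cleanup idea is taken from the commit message).

```java
// Hypothetical sketch: remove the task's entry from the shared bad-record map
// after the load finishes, whether it succeeded or failed, so a later load
// does not see stale state.
public class BadRecordsLoggerSketch {
    static final java.util.concurrent.ConcurrentHashMap<String, String> badRecordEntry =
        new java.util.concurrent.ConcurrentHashMap<>();

    static void runLoad(String taskKey, Runnable load) {
        try {
            load.run();
        } finally {
            badRecordEntry.remove(taskKey); // cleanup on success or failure
        }
    }
}
```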

commit c9e58429af5f78e4001d90d6e3c90a0d48ed2bcf
Author: Jacky Li <jacky.likun@...>
Date:   2018-01-03T16:03:39Z

    [CARBONDATA-1983] Remove unnecessary WriterVo object
    
    This closes #1761

commit 08b8af7b044fa761caf58da2875935d40d4ca13b
Author: sounakr <sounakr@...>
Date:   2018-01-04T09:45:42Z

    [CARBONDATA-1984][Compression Codec] Double Compression Codec Rectification.
    
    This closes #1763

commit bbb4333f1793dae28de7a6c10cf3342204fc87b9
Author: shahidki31 <shahidki31@...>
Date:   2017-12-27T10:47:19Z

    Documentation added for Lock Retry
    
    Added the properties, default values, and descriptions for the lock retry.

----

