[ https://issues.apache.org/jira/browse/ACCUMULO-813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Keith Turner updated ACCUMULO-813:
----------------------------------

         Priority: Major  (was: Blocker)
    Fix Version/s:     (was: 1.5.0)
                   1.6.0

Confirmed with Eric that this was caused by a bad column visibility.  Clearing 
the block caches on IOException is still a good idea.  I lowered the priority, 
though, since the reasons for opening this bug are mitigated in 1.5.
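The fix being discussed could look something like the following sketch. This is not Accumulo's actual block cache API; the class and method names here are hypothetical, and the real tserver cache is more involved. The point is the catch block: on a failed read, drop cached blocks before rethrowing, so a file replaced in HDFS cannot be served from stale cache entries.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not Accumulo's real API): a reader that clears
// the block cache when the underlying read throws an IOException.
public class CacheClearingReader {

    // Minimal in-memory block cache, stand-in for the tserver's cache.
    static class BlockCache {
        final Map<String, byte[]> blocks = new HashMap<>();
        byte[] get(String name) { return blocks.get(name); }
        void put(String name, byte[] data) { blocks.put(name, data); }
        void clear() { blocks.clear(); }
        int size() { return blocks.size(); }
    }

    // Stand-in for the underlying file read.
    interface Source {
        byte[] read(String name) throws IOException;
    }

    final BlockCache cache;
    final Source source;

    CacheClearingReader(BlockCache cache, Source source) {
        this.cache = cache;
        this.source = source;
    }

    byte[] readBlock(String name) throws IOException {
        byte[] hit = cache.get(name);
        if (hit != null)
            return hit;
        try {
            byte[] data = source.read(name);
            cache.put(name, data);
            return data;
        } catch (IOException e) {
            // The file may have been replaced underneath us; stale
            // cached blocks could mix old and new data, so discard
            // the whole cache before propagating the error.
            cache.clear();
            throw e;
        }
    }

    public static void main(String[] args) {
        BlockCache cache = new BlockCache();
        cache.put("old-block", new byte[] {1, 2, 3});
        CacheClearingReader r = new CacheClearingReader(cache,
                name -> { throw new IOException("read failed"); });
        try {
            r.readBlock("new-block");
        } catch (IOException expected) {
            // the failing read cleared the cache
        }
        System.out.println("cached blocks after failure: " + cache.size());
    }
}
```

Run as shown, the failing read leaves the cache empty, which is the behavior the issue asks for.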
                
> clear block caches on IOException
> ---------------------------------
>
>                 Key: ACCUMULO-813
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-813
>             Project: Accumulo
>          Issue Type: Improvement
>          Components: tserver
>            Reporter: Eric Newton
>            Assignee: Keith Turner
>             Fix For: 1.6.0
>
>
> A user generated a bulk import file with illegal data.  After re-generating 
> the file, they thought they could just move the file into HDFS with the new 
> name.  Unfortunately, the block cache remembered some of the data, which 
> caused the data at the block boundaries to be corrupt.
> One possible solution is to clear the block cache when an IOException occurs 
> on a read.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
