Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "HadoopSupport" page has been changed by jeremyhanna:
https://wiki.apache.org/cassandra/HadoopSupport?action=diff&rev1=59&rev2=60

Comment:
noting new consistency level default

  }}}
  The settings normally default to 4 each, but some find that too conservative.  If you set them too low, occasional timeout exceptions can blacklist tasktrackers and fail jobs.  If you set them too high, jobs that would otherwise fail quickly take a long time to fail, sacrificing efficiency.  Keep in mind that raising these values can simply mask an underlying problem.  It may be that you always want these settings higher when operating against Cassandra.  However, if you run into these exceptions frequently, there may be a problem with your Cassandra or Hadoop configuration.
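
  A minimal sketch of raising them, assuming the classic MapReduce property names (each defaults to 4; check the snippet above and your Hadoop version for the exact names):
  {{{
  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Assumed property names; each defaults to 4 in classic MapReduce.
  conf.setInt("mapred.map.max.attempts", 8);      // attempts per map task before it is marked failed
  conf.setInt("mapred.reduce.max.attempts", 8);   // attempts per reduce task before it is marked failed
  conf.setInt("mapred.max.tracker.failures", 8);  // task failures before a tasktracker is blacklisted for the job
  }}}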
  
+ If you are seeing inconsistent data coming back, consider the consistency level at which you read ('''cassandra.consistencylevel.read''') and write ('''cassandra.consistencylevel.write''').  Both properties default to !ConsistencyLevel.LOCAL_ONE (previously [[https://issues.apache.org/jira/browse/CASSANDRA-6214|ONE]]); a sketch of overriding them follows below.
- If you are seeing inconsistent data coming back, consider the consistency 
level that you are reading and writing at.  The two relevant properties are:
- 
-  * '''cassandra.consistencylevel.read''' - defaults to !ConsistencyLevel.ONE.
-  * '''cassandra.consistencylevel.write''' - defaults to !ConsistencyLevel.ONE.
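
  A minimal sketch of overriding both, assuming a job driver with access to the Hadoop {{{Configuration}}} (QUORUM shown because quorum reads reconcile replicas, as noted below):
  {{{
  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Override the LOCAL_ONE defaults for this job's reads from and writes to Cassandra.
  conf.set("cassandra.consistencylevel.read", "QUORUM");
  conf.set("cassandra.consistencylevel.write", "QUORUM");
  }}}
  Recent Cassandra versions also provide setters in !ConfigHelper that write these same keys, but setting the raw properties works regardless.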
  
  Also, the Hadoop integration uses range scans underneath, and range scans do not trigger read repair.  However, reading at !ConsistencyLevel.QUORUM will reconcile differences among the nodes read.  See ReadRepair as well as the !ConsistencyLevel section of the [[http://wiki.apache.org/cassandra/API|API]] page for more details.
  
