[ https://issues.apache.org/jira/browse/CASSANDRA-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909395#comment-13909395 ]

Benedict commented on CASSANDRA-6736:
-------------------------------------

That should be better commented, but I didn't worry about it too much last time 
I massaged this code, as it's all gone in 2.1. However, that isn't a race 
condition, for two reasons:

1) discardCompletedSegments() is called on the same thread as CLS.write(), 
so they're definitely not racing.
2) if you look closely, segment.isUnused() is only called if iter.hasNext(), 
i.e. if it isn't the currently active (i.e. being modified) segment; see the 
sketch below. Dirty marking happens before any actual modification of the CLS 
that would cause it to stop being the active segment. So even if they were 
happening on two different threads, it would be impossible for CLS.write() to 
be modifying the segment whilst isUnused() is being called.
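
For reference, here is a minimal sketch of the loop shape described in 2), 
with simplified, assumed names (the real method lives in CommitLog in 2.0 and 
the details differ):

    void discardCompletedSegments(UUID cfId, ReplayPosition context)
    {
        for (Iterator<CommitLogSegment> iter = activeSegments.iterator(); iter.hasNext();)
        {
            CommitLogSegment segment = iter.next();
            segment.markClean(cfId, context); // dirty/clean bookkeeping only

            // The tail of the list is the segment CLS.write() may still be
            // appending to; iter.hasNext() is false there, so isUnused() is
            // never invoked on the active segment.
            if (iter.hasNext() && segment.isUnused())
            {
                iter.remove();
                discard(segment); // hypothetical helper: delete/recycle the file
            }
        }
    }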

> Windows7 AccessDeniedException on commit log 
> ---------------------------------------------
>
>                 Key: CASSANDRA-6736
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6736
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Windows 7, quad core, 8GB RAM, single Cassandra node, 
> Cassandra 2.0.5 with leakdetect patch from CASSANDRA-6283
>            Reporter: Bill Mitchell
>            Assignee: Joshua McKenzie
>         Attachments: 2014-02-18-22-16.log
>
>
> Similar to the data file deletion of CASSANDRA-6283, under heavy load with 
> logged batches, I am seeing a problem where the Commit log cannot be deleted:
> ERROR [COMMIT-LOG-ALLOCATOR] 2014-02-18 22:15:58,252 CassandraDaemon.java (line 192) Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
> FSWriteError in C:\Program Files\DataStax Community\data\commitlog\CommitLog-3-1392761510706.log
>       at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:120)
>       at org.apache.cassandra.db.commitlog.CommitLogSegment.discard(CommitLogSegment.java:150)
>       at org.apache.cassandra.db.commitlog.CommitLogAllocator$4.run(CommitLogAllocator.java:217)
>       at org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
>       at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>       at java.lang.Thread.run(Unknown Source)
> Caused by: java.nio.file.AccessDeniedException: C:\Program Files\DataStax Community\data\commitlog\CommitLog-3-1392761510706.log
>       at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>       at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>       at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>       at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
>       at sun.nio.fs.AbstractFileSystemProvider.delete(Unknown Source)
>       at java.nio.file.Files.delete(Unknown Source)
>       at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:116)
>       ... 5 more
> (Attached in 2014-02-18-22-16.log is a larger excerpt from the cassandra.log.)
> In this particular case, I was trying to do 100 million inserts into two 
> tables in parallel, one with a single wide row and one with narrow rows, and 
> the error appeared after inserting 43,151,232 rows.  So it does take a while 
> to trip over this timing issue.  
> It may be aggravated by the size of the batches. This test was writing 10,000 
> rows to each table in a batch.  
> When I switch the same test from using a logged batch to an unlogged batch, 
> no such failure appears. So the issue could be related to the use of large, 
> logged batches, or it could be that unlogged batches merely change the 
> probability of failure.  
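
For illustration, the logged-vs-unlogged difference the reporter describes 
comes down to the batch type on the client side. A hypothetical repro sketch 
using the DataStax Java driver (keyspace, table, and schema are assumptions, 
not taken from the report):

    import com.datastax.driver.core.*;

    public class BatchRepro
    {
        public static void main(String[] args)
        {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("test_ks"); // assumed keyspace
            PreparedStatement insert =
                session.prepare("INSERT INTO wide_rows (pk, ck, val) VALUES (?, ?, ?)");

            // Type.LOGGED routes the batch through the batchlog for atomicity;
            // the report says switching this to Type.UNLOGGED made the failure
            // disappear (or at least made it much less likely).
            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            for (int i = 0; i < 10000; i++)
                batch.add(insert.bind(0, i, "payload-" + i));
            session.execute(batch);

            cluster.close();
        }
    }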



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
