[
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948359#comment-13948359
]
Joshua McKenzie commented on CASSANDRA-4050:
--------------------------------------------
Attaching a first pass at converting to nio.2 (a sketch of the API shape follows
the numbers below). Test results on both Windows and Linux at write/read ratios
of 3/1, 50/1, and 100/1, as well as their inverses, look to be within the margin
of error, though we're not getting any huge gains out of this change. 3/1 sample:
{code:title=3/1 w/r test numbers}
4050 mmap 3/1 r/w:
id, ops, op/s, adj op/s, key/s, mean, med, .95, .99, .999, max, time, stderr
4 threadCount, 934400, 31029, 31030, 31029, 0.1, 0.1, 0.2, 0.2, 0.7, 48.3, 30.1, 0.01072
8 threadCount, 1259400, 41576, 41607, 41576, 0.2, 0.2, 0.2, 0.4, 1.1, 37.9, 30.3, 0.01139
16 threadCount, 1478350, 48565, 48592, 48565, 0.3, 0.3, 0.5, 1.0, 7.0, 73.6, 30.4, 0.01197
24 threadCount, 1523350, 49177, -0, 49177, 0.5, 0.4, 0.7, 1.5, 19.1, 71.8, 31.0, 0.01668
36 threadCount, 1518900, 48679, 48718, 48679, 0.7, 0.6, 1.1, 2.3, 22.6, 92.7, 31.2, 0.01425
54 threadCount, 1541050, 48020, 48113, 48020, 1.1, 0.9, 1.8, 4.1, 28.6, 212.6, 32.1, 0.03217

trunk mmap 3/1 r/w:
id, ops, op/s, adj op/s, key/s, mean, med, .95, .99, .999, max, time, stderr
4 threadCount, 926400, 30764, 30765, 30764, 0.1, 0.1, 0.2, 0.2, 0.7, 24.3, 30.1, 0.00997
8 threadCount, 1283250, 42495, -0, 42495, 0.2, 0.2, 0.2, 0.3, 0.9, 44.4, 30.2, 0.01254
16 threadCount, 1478250, 48509, -0, 48509, 0.3, 0.3, 0.5, 0.9, 4.1, 68.0, 30.5, 0.00912
24 threadCount, 1507900, 48553, 48594, 48553, 0.5, 0.4, 0.8, 1.7, 21.2, 132.1, 31.1, 0.01290
36 threadCount, 1515150, 48079, -0, 48079, 0.7, 0.6, 1.2, 2.7, 23.3, 103.8, 31.5, 0.01531
54 threadCount, 1517600, 47826, -0, 47826, 1.1, 0.9, 1.6, 3.2, 25.0, 194.4, 31.7, 0.01819
{code}
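For context on what the nio.2 conversion looks like at the API level, here's a
minimal sketch (not the actual patch - class name, argument handling, and buffer
size are mine): open the data file through a java.nio.file.Path and FileChannel
and do positional reads into a heap buffer, rather than going through
java.io.RandomAccessFile.
{code:title=nio.2 buffered read sketch (illustrative only)}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class Nio2ReadSketch
{
    public static void main(String[] args) throws IOException
    {
        // Pass any data file path as the first argument; purely for illustration.
        Path data = Paths.get(args[0]);

        // nio.2 style: open a FileChannel from a Path instead of wrapping a
        // java.io.RandomAccessFile, then read positionally into a heap buffer
        // (the raw byte[] path the numbers above were run against).
        try (FileChannel channel = FileChannel.open(data, StandardOpenOption.READ))
        {
            ByteBuffer buffer = ByteBuffer.allocate(65536);
            int read = channel.read(buffer, 0L);
            buffer.flip();
            System.out.println("read " + read + " bytes");
        }
    }
}
{code}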
I call out mmap in these results because using BufferedPoolingSegmentedFiles, on
both trunk and this patch, had a noticeable negative impact on throughput, more
so on the nio.2 path than on the raw byte[] usage. On trunk with read-heavy
workloads I'm seeing anywhere from a 30-40% hit in read performance in stress
results. A 1/50 r/w ratio stress run with BufferedPoolingSegmentedFiles was
still 16% slower than my testing using MmappedSegmentedFiles. I'll attach a
sample of the perf numbers I've been getting to CASSANDRA-6890.
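For comparison, this is roughly what the mmap'ed path buys us (again just a
sketch, not MmappedSegmentedFiles itself): the file is mapped once, and
subsequent reads are plain memory accesses served out of the page cache rather
than a read() syscall plus copy on every rebuffer.
{code:title=mmap'ed read sketch (illustrative only)}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MmapReadSketch
{
    public static void main(String[] args) throws IOException
    {
        Path data = Paths.get(args[0]);

        try (FileChannel channel = FileChannel.open(data, StandardOpenOption.READ))
        {
            // Map (up to) the first 1MB once; reads against the mapped buffer
            // are memory accesses, with no per-read syscall.
            long length = Math.min(channel.size(), 1 << 20);
            MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, length);

            if (mapped.hasRemaining())
                System.out.println("first byte: " + mapped.get());
        }
    }
}
{code}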
I put some yammer timers inside the RandomAccessReader (RAR) code on both trunk
and this branch, and the numbers are comparable up to about the 60th percentile
or so across all major read and rebuffer operations - then they balloon, into
the territory of a max time of 100+ms on a simple channel seek vs. .01ms on the
mmap'ed path. GC count during stress is roughly double at a glance - I'll look
into that further on 6890, but increased heap pressure from the extra on-heap
activity is to be expected.
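The timing itself was nothing fancy - roughly the shape below, though this
sketch uses the current Codahale Metrics API (the successor to the yammer
library) and a hypothetical stand-in method rather than the actual RAR internals:
{code:title=rebuffer timing sketch (illustrative only)}
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Snapshot;
import com.codahale.metrics.Timer;

public class RebufferTimingSketch
{
    private static final MetricRegistry registry = new MetricRegistry();
    private static final Timer rebufferTimer = registry.timer("RandomAccessReader.rebuffer");

    // Hypothetical stand-in for RAR rebuffering; the real method refills the
    // reader's buffer from the underlying channel or mapped segment.
    static void reBuffer()
    {
        try (Timer.Context ignored = rebufferTimer.time())
        {
            // ... actual rebuffer work would happen here ...
        }
    }

    public static void main(String[] args)
    {
        for (int i = 0; i < 1000; i++)
            reBuffer();

        // Timer values are recorded in nanoseconds; convert to ms for printing.
        Snapshot s = rebufferTimer.getSnapshot();
        System.out.printf("p60 = %.3f ms, max = %.3f ms%n",
                          s.getValue(0.60) / 1_000_000.0,
                          s.getMax() / 1_000_000.0);
    }
}
{code}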
As noted earlier: in order to fully resolve this issue, either CASSANDRA-6890
will need to be resolved, or we'll need some alternative solution for Windows if
we keep mmap'ing.
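For anyone who wants to see the Windows failure mode in isolation, here's a
standalone sketch (hypothetical file names, grounded only in the behavior
described in the report below): keep a live mapping of a file, then try to
delete a hard link to it the way snapshot cleanup would.
{code:title=snapshot delete sketch (illustrative only)}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SnapshotDeleteSketch
{
    public static void main(String[] args) throws IOException
    {
        // Hypothetical stand-ins for an sstable and its snapshot hard link.
        // Requires a filesystem that supports hard links (e.g. NTFS, ext4).
        Path sstable = Files.createTempFile("sstable", ".db");
        Files.write(sstable, new byte[4096]);
        Path snapshot = Paths.get(sstable.toString() + ".snapshot");
        Files.createLink(snapshot, sstable);

        try (FileChannel channel = FileChannel.open(sstable, StandardOpenOption.READ))
        {
            // Keep a live mapping of the original sstable, as a running node would.
            MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, 4096);

            try
            {
                // On Linux this succeeds. On Windows it fails while the mapping
                // is live (the behavior described in the report below), and with
                // nio.2 we at least get an exception with a reason rather than
                // java.io.File.delete() returning false.
                Files.delete(snapshot);
                System.out.println("snapshot link deleted; " + mapped.capacity() + " bytes still mapped");
            }
            catch (IOException e)
            {
                System.out.println("delete failed: " + e);
            }
        }
        // Cleanup of the temp files is omitted; on Windows they may not be
        // deletable until the mapping itself has been garbage collected.
    }
}
{code}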
> Unable to remove snapshot files on Windows while original sstables are live
> ---------------------------------------------------------------------------
>
> Key: CASSANDRA-4050
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4050
> Project: Cassandra
> Issue Type: Bug
> Environment: Windows 7
> Reporter: Jim Newsham
> Assignee: Joshua McKenzie
> Priority: Minor
> Attachments: CASSANDRA-4050_v1.patch
>
>
> I'm using Cassandra 1.0.8, on Windows 7. When I take a snapshot of the
> database, I find that I am unable to delete the snapshot directory (i.e., dir
> named "{datadir}\{keyspacename}\snapshots\{snapshottag}") while Cassandra is
> running: "The action can't be completed because the folder or a file in it
> is open in another program. Close the folder or file and try again" [in
> Windows Explorer]. If I terminate Cassandra, then I can delete the directory
> with no problem.
> I expect to be able to move or delete the snapshotted files while Cassandra
> is running, as this should not affect the runtime operation of Cassandra.