[
https://issues.apache.org/jira/browse/CASSANDRA-47?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13062678#comment-13062678
]
Stu Hood commented on CASSANDRA-47:
-----------------------------------
bq. example ./bin/stress -S 1024 -n 1000000 -C 250 -V [...] I see 3.8GB compressed into 781MB in my tests.
Why would the uncompressed size for -C 250 be different from the uncompressed
size for -C 50: 3.8 GB vs the 1.7 GB from before?
On a side note: a cardinality of 250 makes for less variance in the average size of the randomly generated values, but I still see relatively large size differences between consecutive runs. It might be worth opening a separate ticket for stress.java to make the generated random values a fixed size.
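As a minimal sketch of what such a fixed-size generator could look like (the class and method names here are hypothetical, not from the actual stress.java):

```java
import java.util.Random;

public class FixedSizeValues {
    // Hypothetical helper: always produce values of exactly sizeInBytes,
    // so the total data volume is identical from run to run and only the
    // value *content* varies with the seed.
    public static byte[] fixedSizeValue(Random random, int sizeInBytes) {
        byte[] value = new byte[sizeInBytes];
        random.nextBytes(value); // random content, fixed length
        return value;
    }
}
```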
----
With the understanding that there is a lot of variance in the results, here are
some preliminary results for the _bin/stress -S 1024 -n 1000000 -C 250 -V_
workload:
|| build || disk volume (bytes) || write runtime (s) || read runtime (s) || read ops/s ||
| trunk | 4,015,004,187 | 372 | >> 1,000 | ~216 |
| #674 + #2319 | 594,796,624 | 273 | 255 | 3845 |
| #47* | | | | |
\* need to figure out the problem I was having above
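For reference, the CPU-for-I/O trade shown in the table can be reproduced at small scale with any block compressor. The sketch below uses java.util.zip.Deflater from the JDK as a stand-in for the Snappy codec attached to this ticket (snappy-java exposes an analogous Snappy.compress/Snappy.uncompress pair); this is illustrative only, not the actual CASSANDRA-47 code path:

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class BlockCompressionSketch {
    // Compress one block at the fastest deflate level: we spend CPU cycles
    // here in exchange for writing fewer bytes to disk.
    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(input);
        deflater.finish();
        // Small margin for the incompressible worst case (stored-block overhead).
        byte[] buffer = new byte[input.length + 64];
        int written = deflater.deflate(buffer);
        deflater.end();
        return Arrays.copyOf(buffer, written);
    }

    // Decompress back into a buffer of the known original length.
    public static byte[] decompress(byte[] compressed, int originalLength) {
        try {
            Inflater inflater = new Inflater();
            inflater.setInput(compressed);
            byte[] out = new byte[originalLength];
            inflater.inflate(out);
            inflater.end();
            return out;
        } catch (DataFormatException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Low-cardinality workloads like -C 250 are exactly where a block compressor pays off, since repeated values deflate to a fraction of their raw size.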
> SSTable compression
> -------------------
>
> Key: CASSANDRA-47
> URL: https://issues.apache.org/jira/browse/CASSANDRA-47
> Project: Cassandra
> Issue Type: New Feature
> Components: Core
> Reporter: Jonathan Ellis
> Assignee: Pavel Yaskevich
> Labels: compression
> Fix For: 1.0
>
> Attachments: CASSANDRA-47.patch, snappy-java-1.0.3-rc4.jar
>
>
> We should be able to do SSTable compression which would trade CPU for I/O
> (almost always a good trade).