Good question.
We could also consider changing the test to run uncompressed, to
maintain some test coverage for the uncompressed case.
Thoughts?
Cheers,
Till
On 11 Oct 2019, at 9:48, Wail Alkowaileet wrote:
Should we remove [1] and [2]? [1] is now similar to [3] except for the
buffer cache size.
[1]
https://github.com/apache/asterixdb/blob/master/asterixdb/asterix-app/src/test/resources/cc-compression.conf
[2]
https://github.com/apache/asterixdb/blob/5aeba9b475fc714edf953bd88ada7281a2d4937e/asterixdb/asterix-app/src/test/java/org/apache/asterix/test/runtime/SqlppExecutionWithCompresisionTest.java
[3]
https://github.com/apache/asterixdb/blob/master/asterixdb/asterix-app/src/test/resources/cc.conf
On Fri, Oct 11, 2019 at 5:51 AM Michael Blow <[email protected]>
wrote:
All,
The default storage block level compression strategy is changing from
'none' to 'snappy'.
Existing datasets will not be affected, but if you want to prevent new
datasets from being compressed, either specify that the compression
scheme should be none at dataset creation time:
*create dataset DBLP1(DBLPType) primary key id with
{"storage-block-compression": {"scheme": "none"}};*
...or override the default in the config file to be none:
*storage.compression.block = none*
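For reference, once the change lands, the new default should behave as
if snappy were requested explicitly; a sketch of the equivalent DDL,
using the same dataset and type names as the example above:

    create dataset DBLP1(DBLPType) primary key id with
    {"storage-block-compression": {"scheme": "snappy"}};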
I expect this to be merged into master today.
Thanks,
-MDB
--
*Regards,*
Wail Alkowaileet