Great, glad to hear it. And yes, each compaction daemon only watches the shard 
files that are hosted on the local node. Cheers,
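
For example (the shard ranges and timestamp below are illustrative; use the actual file names under your node's data directory), each node's local.ini would list only its own shard files:

```ini
[compactions]
shards/00000000-1fffffff/_global_changes.1512750761 = [{db_fragmentation, "70%"}]
shards/20000000-3fffffff/_global_changes.1512750761 = [{db_fragmentation, "70%"}]
```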

Adam

> On Dec 11, 2017, at 2:37 PM, <[email protected]> wrote:
> 
> Hi Adam,
> 
> Thanks for the clarification.  I updated my config and it worked!
> 
> In the example below, I had included the compaction rules for all four nodes 
> in the config for one node.  In retrospect, this probably doesn't make sense, 
> since I think the compaction daemon only sees its local shards.  So I'll have 
> each node's config list only the full paths of that node's local shards.
> 
> Thanks again, hopefully a bug fix will be provided soon.
> 
> /mel
> 
> -----Original Message-----
> From: Adam Kocoloski [mailto:[email protected]]
> Sent: Friday, December 08, 2017 10:57 PM
> To: [email protected]
> Subject: Re: Automatic Compaction of _global_changes
> 
> Hi Melvin, right, it needs to be the full path as in my example below:
> 
>> shards/00000000-1fffffff/_global_changes.1512750761
> 
> i.e. you need to include the "shards/xx-yy/" piece as well.
> 
> It is a bit curious that you’ve got those 4 separate timestamps. Typically 
> you’d see all the shards with the same timestamp. Did you try to create the 
> _global_changes database multiple times or anything funny like that? Are each 
> of the associated files actually growing in size?
> 
> Cheers, Adam
> 
>> On Dec 8, 2017, at 3:59 PM, <[email protected]> wrote:
>> 
>> I tried the workaround two different ways, and it doesn't seem to work 
>> either. I have a 4 node cluster, and I added this:
>> 
>> [compactions]
>> _global_changes.1483029052 = [{db_fragmentation, "70%"}]
>> _global_changes.1483029075 = [{db_fragmentation, "70%"}]
>> _global_changes.1483029126 = [{db_fragmentation, "70%"}]
>> _global_changes.1483029167 = [{db_fragmentation, "70%"}]
>> 
>> I also tried this:
>> 
>> [compactions]
>> _global_changes.1483029052.couch = [{db_fragmentation, "70%"}]
>> _global_changes.1483029075.couch = [{db_fragmentation, "70%"}]
>> _global_changes.1483029126.couch = [{db_fragmentation, "70%"}]
>> _global_changes.1483029167.couch = [{db_fragmentation, "70%"}]
>> 
>> -----Original Message-----
>> From: Lew, Melvin K 
>> Sent: Friday, December 08, 2017 3:03 PM
>> To: [email protected]
>> Subject: RE: Automatic Compaction of _global_changes
>> 
>> Thanks Adam! I've submitted the following issue: 
>> https://github.com/apache/couchdb/issues/1059
>> 
>> Thanks also for the tip on the security vulnerability. We'll upgrade to 
>> CouchDB 2.1.1 soon.  Fortunately this is an internal database on a 
>> firewalled corporate network so we have a bit more time.
>> 
>> -----Original Message-----
>> From: Adam Kocoloski [mailto:[email protected]] 
>> Sent: Friday, December 08, 2017 11:42 AM
>> To: [email protected]
>> Subject: Re: Automatic Compaction of _global_changes
>> 
>> Hiya Melvin, this looks like a bug. I think what’s happening is the 
>> compaction daemon is walking the list of database *shards* on the node and 
>> comparing those names directly against the keys in that config block. The 
>> shard files have internal names like
>> 
>> shards/00000000-1fffffff/_global_changes.1512750761
>> 
>> If you want to test this out you could look for the full path to one of your 
>> _global_changes shards and supply that as the key instead of just 
>> “_global_changes”. Repeating the config entry for every one of the shards 
>> could also be a workaround for you until we get this patched. Can you file 
>> an issue for it at 
>> https://github.com/apache/couchdb/issues
>> 
>> By the way, releases prior to 1.7.1 and 2.1.1 have a fairly serious security 
>> vulnerability; it’d be good if you could upgrade. Cheers,
>> 
>> Adam
>> 
>>> On Dec 6, 2017, at 2:21 PM, [email protected] wrote:
>>> 
>>> Hi,
>>> 
>>> I'm using couchdb 2.0.0 on RHEL 7.2 and I'm looking to configure automatic 
>>> compaction of _global_changes but I can't seem to get it to work. I've 
>>> checked the file size and data size of the _global_changes database so I 
>>> know the criteria I've specified have been met. I don't get an error upon 
>>> couchdb startup, but nothing happens.  When I set a _default compaction 
>>> rule, compaction does happen for all databases, including _global_changes. 
>>> Any ideas? I hope I'm just missing something obvious. 
>>> Please let me know if any more detail is needed.
>>> 
>>> This is what I have in local.ini that does not work:
>>> [compactions]
>>> _global_changes = [{db_fragmentation, "70%"}]
>>> 
>>> Putting this into local.ini does work, but I don't want to compact all 
>>> databases:
>>> [compactions]
>>> _default = [{db_fragmentation, "70%"}]
>>> 
>>> For the purposes of my testing, I've also added:
>>> [compaction_daemon]
>>> check_interval = 30
>>> 
>>> Thanks in advance!

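P.S. for future readers of this thread: the per-shard [compactions] entries can be generated from a node's data directory. A minimal sketch, using a mock shard tree with an illustrative timestamp rather than a real install:

```shell
# Build a mock shard tree mirroring CouchDB's on-disk layout
# (shard ranges and timestamp are made up for illustration).
demo=$(mktemp -d)
mkdir -p "$demo/shards/00000000-1fffffff" "$demo/shards/20000000-3fffffff"
touch "$demo/shards/00000000-1fffffff/_global_changes.1512750761.couch"
touch "$demo/shards/20000000-3fffffff/_global_changes.1512750761.couch"

# The compaction daemon matches config keys against relative shard names
# like "shards/00000000-1fffffff/_global_changes.1512750761" (no .couch
# extension), so emit one entry per local shard, full path included:
cd "$demo"
echo "[compactions]"
for f in shards/*/_global_changes.*.couch; do
  echo "${f%.couch} = [{db_fragmentation, \"70%\"}]"
done
```

On a real node the loop would run over the configured database_dir instead of a temp directory.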