Appending the ddoc's name to _compact is the right way to trigger a view's
compaction, but the catch here is that not every shard has the ddoc; it is
distributed the same way as the rest of the docs. I guess the simplest way to
find where it ended up is just to check the shards one by one, e.g.


$ http head :15986/shards%2F00000000-1fffffff%2Fkoi.1469574936/_design/map --print=h
HTTP/1.1 404 Object Not Found
Cache-Control: must-revalidate
Content-Length: 41
Content-Type: application/json
Date: Wed, 27 Jul 2016 02:33:10 GMT
Server: CouchDB/2f32166 (Erlang OTP/18)

$ http head :15986/shards%2Fc0000000-dfffffff%2Fkoi.1469574936/_design/map --print=h
HTTP/1.1 200 OK
Cache-Control: must-revalidate
Content-Length: 340
Content-Type: application/json
Date: Wed, 27 Jul 2016 02:33:26 GMT
ETag: "1-552e215b76ed16b116d7b50f8d50c9a0"
Server: CouchDB/2f32166 (Erlang OTP/18)

$ http post :15986/shards%2Fc0000000-dfffffff%2Fkoi.1469574936/_compact/map content-type:application/json
{
    "ok": true
}
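
If you'd rather not check each range by hand, here is a rough sketch of a
loop over the node-local shard databases (assuming httpie and jq are
installed; "map" is the ddoc name from the example above):

$ for db in $(http :15986/_all_dbs | jq -r '.[] | select(startswith("shards/"))'); do
      enc=${db//\//%2F}   # URL-encode the slashes in the shard name
      if http --check-status head ":15986/$enc/_design/map" >/dev/null 2>&1; then
          http post ":15986/$enc/_compact/map" content-type:application/json
      fi
  done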



Regards,
Eric



> On Jul 26, 2016, at 11:00 AM, Peyton Vaughn <[email protected]> wrote:
> 
> Thank you sooo much Eric - I find examples, in absence of documentation, a
> tremendous help - that was exactly what I needed.
> 
> Turns out it's the "global_changes" database that's the culprit - but as
> was expected, compaction fixes the disparity in storage usage.
> Given that even global_changes is sharded, is it a concern at all that some
> shards end up significantly larger than others? The most egregious example
> from my 3-node cluster looks like:
> 29G /usr/src/couchdb/dev/lib/node1/data/shards/00000000-1fffffff
> 8.0G /usr/src/couchdb/dev/lib/node1/data/shards/c0000000-dfffffff
> 510M /usr/src/couchdb/dev/lib/node1/data/shards/80000000-9fffffff
> 508M /usr/src/couchdb/dev/lib/node1/data/shards/e0000000-ffffffff
> 1.7G /usr/src/couchdb/dev/lib/node1/data/shards/40000000-5fffffff
> 56K /usr/src/couchdb/dev/lib/node1/data/shards/60000000-7fffffff
> 510M /usr/src/couchdb/dev/lib/node1/data/shards/20000000-3fffffff
> 1.7G /usr/src/couchdb/dev/lib/node1/data/shards/a0000000-bfffffff
> 42G /usr/src/couchdb/dev/lib/node1/data/shards
> 
> Given that there is a piece of the global_changes DB in each shard range,
> that's obviously not an even distribution...
> 
> But maybe this is known/welcome behavior... mainly including the above info
> in case it's of interest to the 2.0 beta testing efforts.
> 
> 
> If I could ask one more question: how do I trigger compaction on the
> sharded views? Using the same base URLs that worked for DB compaction, I
> tried appending '_compact/[design doc name]' which gets me
> {"error":"not_found","reason":"missing"}, and I also tried hitting the
> '/[DB]/_view_cleanup' endpoint, which gives me a longer
> '{"error":"badmatch","reason":"{database_does_not_exist,\n
> [{mem3_shards,load_shards_from_db....' response.
> 
> Apologies if I'm overlooking something obvious.
> Thanks again for the help,
> peyton
> 
> 
> On Mon, Jul 25, 2016 at 11:29 AM, Eiri <[email protected]> wrote:
> 
>> 
>> Hey Peyton,
>> 
>> Here is an example. First, get a list of all the shards from the admin port
>> (15986):
>> 
>> $ http :15986/_all_dbs
>> [
>>    "_replicator",
>>    "_users",
>>    "dbs",
>>    "shards/00000000-ffffffff/koi.1469199178"
>> ]
>> 
>> You are interested in the databases with the "shards" prefix and need to run
>> the usual compaction on each of them. The only catch is that the names have
>> to be URL-encoded. So in my case:
>> 
>> $ http post :15986/shards%2F00000000-ffffffff%2Fkoi.1469199178/_compact content-type:application/json
>> {
>>    "ok": true
>> }
>> 
>> Mind that the content type has to be specified. And of course this needs to
>> be run on all the nodes; the admin interface is not clustered, i.e. the API
>> commands will not be carried across the cluster.
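>> 
>> To hit every shard database in one go, something like this loop should work
>> (a rough sketch, assuming httpie and jq are installed; run it against each
>> node's admin port in turn):
>> 
>> $ for db in $(http :15986/_all_dbs | jq -r '.[] | select(startswith("shards/"))'); do
>>       http post ":15986/${db//\//%2F}/_compact" content-type:application/json
>>   done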
>> 
>> Regards,
>> Eric
>> 
>> 
>>> On Jul 25, 2016, at 12:04 PM, Peyton Vaughn <[email protected]> wrote:
>>> 
>>> Apologies - bad copy-paste - I'm doing this against port 15986. (All nodes
>>> in the cluster are on 1598[46], since they are not in a single container.)
>>> ~>curl -H "Content-Type: application/json" -X POST 'http://localhost:15986/shards&#47;00000000-1fffffff/_compact' --user admin:wacit
>>> {"error":"illegal_database_name","reason":"Name: 'shards&'. Only lowercase characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -, and / are allowed. Must begin with a letter."}
>>> ~>curl -H "Content-Type: application/json" -X POST 'http://localhost:15986/shards/00000000-1fffffff/_compact' --user admin:wacit
>>> {"error":"not_found","reason":"no_db_file"}
>>> ~>curl -H "Content-Type: application/json" -X POST 'http://localhost:15986/shards\/00000000-1fffffff/_compact' --user admin:wacit
>>> {"error":"illegal_database_name","reason":"Name: 'shards\\'. Only lowercase characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -, and / are allowed. Must begin with a letter."}
>>> ~>curl -H "Content-Type: application/json" -X POST 'http://localhost:15986/staging_inventory/_compact' --user admin:wacit
>>> {"error":"not_found","reason":"no_db_file"}
>>> 
>>> Is it possible to get an example?
>>> 
>>> On Mon, Jul 25, 2016 at 10:58 AM, Jan Lehnardt <[email protected]> wrote:
>>> 
>>>> 
>>>>> On 25 Jul 2016, at 16:35, Peyton Vaughn <[email protected]> wrote:
>>>>> 
>>>>> I'm afraid I must echo Teo's question: how do I run compaction at the
>>>>> shard level?
>>>>> 
>>>>> Fauxton lists all of my shards as:
>>>>> 
>>>>> shards/00000000-1fffffff/_global_changes.1469456629 This database
>>>>> failed to load.
>>>>> 
>>>>> So interaction there doesn't seem to be an option.
>>>>> I attempted to use curl, as outlined in the documentation:
>>>>> 
>>>>> curl -XPOST  http://localhost:15984/????/_compact
>>>>> 
>>>>> But I cannot figure out the correct database name to provide. All of my
>>>>> attempts result in a "not_found" or an "illegal_database_name" error.
>>>> 
>>>> The answer is the same, quoth @rnewson:
>>>> 
>>>>> You'll need to do so on port 5986, the node-local interface.
>>>> 
>>>> That is [1-3]5986 in your dev cluster case.
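>>>> 
>>>> For example, against the first node (a sketch only; the shard name comes
>>>> from your Fauxton listing above, with the slashes URL-encoded as %2F, and
>>>> the credentials are placeholders):
>>>> 
>>>> curl -X POST 'http://localhost:15986/shards%2F00000000-1fffffff%2F_global_changes.1469456629/_compact' \
>>>>      -H "Content-Type: application/json" --user <admin>:<password>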
>>>> 
>>>> Best
>>>> Jan
>>>> --
>>>> 
>>>>> 
>>>>> peyton
>>>>> 
>>>>> 
>>>>> 
>>>>> On Sat, Jul 23, 2016 at 2:32 PM, Robert Newson <[email protected]> wrote:
>>>>> 
>>>>>> You'll need to do so on port 5986, the node-local interface.
>>>>>> 
>>>>>> Sent from my iPhone
>>>>>> 
>>>>>>> On 23 Jul 2016, at 07:15, Constantin Teodorescu <[email protected]> wrote:
>>>>>>> 
>>>>>>>> On Sat, Jul 23, 2016 at 12:47 AM, Robert Newson <[email protected]> wrote:
>>>>>>>> 
>>>>>>>> Are you updating one doc over and over? That's my inference. Also you'll
>>>>>>>> need to run compaction on all shards then look at the distribution
>>>>>>>> afterward.
>>>>>>> 
>>>>>>> How do I run compaction on all shards?
>>>>>>> In the Fauxton UI I couldn't find any button for database or view
>>>>>>> compaction! :-(
>>>>>>> 
>>>>>>> Teo
>>>>>> 
>>>>>> 
>>>> 
>>>> --
>>>> Professional Support for Apache CouchDB:
>>>> https://neighbourhood.ie/couchdb-support/
>>>> 
>>>> 
>> 
>> 
