Wow - thanks for the responses, guys!
Adam, thanks for the helpful explanation - it makes perfect sense now.
And Eric - whew - I would never have figured that out without your examples
(as you speculated - the ddoc was on a completely different node than the
one I was originally testing with).
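
For the archives, here's roughly the shape of the call that ended up working
once I pointed it at the node that actually holds the shard - the host,
timestamp suffix, and design doc name below are placeholders (read the real
ones off that node's :15986/_all_dbs listing), and note the design doc name
goes after _compact without the "_design/" prefix:

curl -H "Content-Type: application/json" -X POST --user admin:wacit \
  'http://<other-node>:15986/shards%2F00000000-1fffffff%2Fstaging_inventory.<timestamp>/_compact/<ddoc-name>'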

Again, I really appreciate it!
Peyton

On Tue, Jul 26, 2016 at 11:26 PM, Adam Kocoloski <[email protected]>
wrote:

> Hi Peyton,
>
> It’s expected. The global_changes DB contains one document for every other
> database in the cluster. If you’re primarily writing to one database, the
> associated doc in the global_changes DB will have a ton of revisions and the
> shard hosting that doc will grow quickly. Other shards of global_changes
> won’t see the same growth. The good news, as you've noticed, is that it
> should also compact right back down.
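>
> If you want to watch it happen, the node-local interface reports the shard
> file's doc count and sizes directly. Just a sketch - it reuses the shard
> name from your earlier mail and the first dev-cluster admin port, so adjust
> both (and add --user with your admin credentials if needed):
>
> # check the hot _global_changes shard before and after
> curl 'http://localhost:15986/shards%2F00000000-1fffffff%2F_global_changes.1469456629'
> # trigger compaction of that shard file
> curl -X POST -H "Content-Type: application/json" \
>   'http://localhost:15986/shards%2F00000000-1fffffff%2F_global_changes.1469456629/_compact'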
>
> Cheers, Adam
>
> > On Jul 26, 2016, at 10:00 AM, Peyton Vaughn <[email protected]> wrote:
> >
> > Thank you sooo much Eric - I find examples, in the absence of
> > documentation, a tremendous help - that was exactly what I needed.
> >
> > Turns out it's the "global_changes" database that's the culprit - but as
> > was expected, compaction fixes the disparity in storage usage.
> > Given that even global_changes is sharded, is it a concern at all that
> > some shards end up significantly larger than others? The most egregious
> > example from my 3-node cluster looks like:
> > 29G /usr/src/couchdb/dev/lib/node1/data/shards/00000000-1fffffff
> > 8.0G /usr/src/couchdb/dev/lib/node1/data/shards/c0000000-dfffffff
> > 510M /usr/src/couchdb/dev/lib/node1/data/shards/80000000-9fffffff
> > 508M /usr/src/couchdb/dev/lib/node1/data/shards/e0000000-ffffffff
> > 1.7G /usr/src/couchdb/dev/lib/node1/data/shards/40000000-5fffffff
> > 56K /usr/src/couchdb/dev/lib/node1/data/shards/60000000-7fffffff
> > 510M /usr/src/couchdb/dev/lib/node1/data/shards/20000000-3fffffff
> > 1.7G /usr/src/couchdb/dev/lib/node1/data/shards/a0000000-bfffffff
> > 42G /usr/src/couchdb/dev/lib/node1/data/shards
> >
> > Given that global_changes has a shard in each of these ranges, that's
> > obviously not an even distribution...
> >
> > But maybe this is known/welcome behavior... mainly including the above
> > info in case it's of interest to the 2.0 beta testing efforts.
> >
> >
> > If I could ask one more question: how do I trigger compaction on the
> > sharded views? Using the same base URLs that worked for DB compaction, I
> > tried appending '_compact/[design doc name]' which gets me
> > {"error":"not_found","reason":"missing"}, and I also tried hitting the
> > '/[DB]/_view_cleanup' endpoint, which gives me a longer
> > '{"error":"badmatch","reason":"{database_does_not_exist,\n
> > [{mem3_shards,load_shards_from_db....' response.
> >
> > Apologies if I'm overlooking something obvious.
> > Thanks again for the help,
> > peyton
> >
> >
> > On Mon, Jul 25, 2016 at 11:29 AM, Eiri <[email protected]> wrote:
> >
> >>
> >> Hey Peyton,
> >>
> >> Here is an example. First, get a list of all the shards from the admin
> >> port (15986):
> >>
> >> http :15986/_all_dbs
> >> [
> >>    "_replicator",
> >>    "_users",
> >>    "dbs",
> >>    "shards/00000000-ffffffff/koi.1469199178"
> >> ]
> >>
> >> You are interested in the databases with the "shards" prefix and need to
> >> run the usual compaction on each of them. The only catch is that the names
> >> have to be URL-encoded. So in my case:
> >>
> >> $ http post :15986/shards%2F00000000-ffffffff%2Fkoi.1469199178/_compact \
> >>     content-type:application/json
> >> {
> >>    "ok": true
> >> }
> >>
> >> Mind that the content-type has to be specified. And of course this needs
> >> to be run on all the nodes; the admin interface is not clustered, i.e. the
> >> API commands will not be carried across the cluster.
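> >>
> >> For a default three-node dev cluster, something like this would cover all
> >> of them (just a sketch, using my shard name and the standard
> >> 15986/25986/35986 admin ports - substitute your own, and if each node runs
> >> in its own container just repeat the single :15986 call on each one):
> >>
> >> for port in 15986 25986 35986; do
> >>     http post :$port/shards%2F00000000-ffffffff%2Fkoi.1469199178/_compact \
> >>         content-type:application/json
> >> done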
> >>
> >> Regards,
> >> Eric
> >>
> >>
> >>> On Jul 25, 2016, at 12:04 PM, Peyton Vaughn <[email protected]> wrote:
> >>>
> >>> Apologies - bad copy paste - I'm doing this against port 15986. (All
> >>> nodes in the cluster are 1598[46], since they are not in a single
> >>> container).
> >>> ~>curl -H "Content-Type: application/json" -X POST \
> >>>   'http://localhost:15986/shards&#47;00000000-1fffffff/_compact' --user admin:wacit
> >>> {"error":"illegal_database_name","reason":"Name: 'shards&'. Only lowercase
> >>> characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -,
> >>> and / are allowed. Must begin with a letter."}
> >>> ~>curl -H "Content-Type: application/json" -X POST \
> >>>   'http://localhost:15986/shards/00000000-1fffffff/_compact' --user admin:wacit
> >>> {"error":"not_found","reason":"no_db_file"}
> >>> ~>curl -H "Content-Type: application/json" -X POST \
> >>>   'http://localhost:15986/shards\/00000000-1fffffff/_compact' --user admin:wacit
> >>> {"error":"illegal_database_name","reason":"Name: 'shards\\'. Only lowercase
> >>> characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -,
> >>> and / are allowed. Must begin with a letter."}
> >>> ~>curl -H "Content-Type: application/json" -X POST \
> >>>   'http://localhost:15986/staging_inventory/_compact' --user admin:wacit
> >>> {"error":"not_found","reason":"no_db_file"}
> >>>
> >>> Is it possible to get an example?
> >>>
> >>> On Mon, Jul 25, 2016 at 10:58 AM, Jan Lehnardt <[email protected]> wrote:
> >>>
> >>>>
> >>>>> On 25 Jul 2016, at 16:35, Peyton Vaughn <[email protected]> wrote:
> >>>>>
> >>>>> I'm afraid I must echo Teo's question: how do I run compaction at the
> >>>>> shard level?
> >>>>>
> >>>>> Fauxton lists all of my shards as:
> >>>>>
> >>>>> shards/00000000-1fffffff/_global_changes.1469456629 This database
> >>>>> failed to load.
> >>>>>
> >>>>> So interaction there doesn't seem to be an option.
> >>>>> I attempted to use curl, as outlined in the documentation:
> >>>>>
> >>>>> curl -XPOST  http://localhost:15984/????/_compact
> >>>>>
> >>>>> But I cannot figure out the correct database name to provide. All of
> >>>>> my attempts result in a "not_found" or an "illegal_database_name" error
> >>>>> being returned.
> >>>>
> >>>> The answer is the same, quoth @rnewson:
> >>>>
> >>>>> You'll need to do so on port 5986, the node-local interface.
> >>>>
> >>>> That is [1-3]5986 in your dev cluster case.
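> >>>>
> >>>> For instance (sketching against the first dev node - add your admin
> >>>> credentials if your setup needs them):
> >>>>
> >>>> curl http://localhost:15986/_all_dbs
> >>>>
> >>>> lists the shard databases that node hosts; those are the names you
> >>>> compact there.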
> >>>>
> >>>> Best
> >>>> Jan
> >>>> --
> >>>>
> >>>>>
> >>>>> peyton
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Sat, Jul 23, 2016 at 2:32 PM, Robert Newson <[email protected]>
> >>>>> wrote:
> >>>>>
> >>>>>> You'll need to do so on port 5986, the node-local interface.
> >>>>>>
> >>>>>> Sent from my iPhone
> >>>>>>
> >>>>>>> On 23 Jul 2016, at 07:15, Constantin Teodorescu
> >>>>>>> <[email protected]> wrote:
> >>>>>>>
> >>>>>>>> On Sat, Jul 23, 2016 at 12:47 AM, Robert Newson
> >>>>>>>> <[email protected]> wrote:
> >>>>>>>>
> >>>>>>>> Are you updating one doc over and over? That's my inference. Also
> >>>>>>>> you'll need to run compaction on all shards then look at the
> >>>>>>>> distribution afterward.
> >>>>>>>
> >>>>>>> How do I run compaction on all shards?
> >>>>>>> In the Fauxton UI I didn't find any button anywhere for database or
> >>>>>>> view compaction! :-(
> >>>>>>>
> >>>>>>> Teo
> >>>>>>
> >>>>>>
> >>>>
> >>>> --
> >>>> Professional Support for Apache CouchDB:
> >>>> https://neighbourhood.ie/couchdb-support/
> >>>>
> >>>>
> >>
> >>
>
>
