Hi Gil,
(we spoke in Dublin, didn't we?)

Short of stopping Solr, I have a feeling there isn't much you can
do... hm... or, I wonder if you could somehow get a thread dump, find
the id of the merge thread (threads on Linux show up as lightweight
processes, with their own task ids), and then kill that thread...
Feels scary and I'm not sure what it would do to the index, but maybe
somebody else can jump in and comment on this approach or suggest a
better one.
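
If you want to try spotting the thread, something like this untested
sketch might help; it assumes jstack is on the PATH and that you pass
the Solr JVM's pid as the first argument:

    import re
    import subprocess
    import sys

    solr_pid = sys.argv[1]  # the Solr JVM's process id

    # Take a thread dump of the running JVM.
    dump = subprocess.check_output(["jstack", solr_pid]).decode("utf-8", "replace")

    # Merge work runs in threads named "Lucene Merge Thread #N". The nid
    # field is the native thread id in hex; on Linux that's the task id
    # you'd see under /proc/<pid>/task/.
    pattern = r'"(Lucene Merge Thread[^"]*)".*nid=(0x[0-9a-fA-F]+)'
    for name, nid in re.findall(pattern, dump):
        print("%s -> Linux task id %d" % (name, int(nid, 16)))

Note that a plain kill -9 on one of those task ids will most likely
take down the whole JVM, not just the thread, which may be no better
than stopping Solr outright.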

Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


On Mon, Nov 11, 2013 at 10:44 AM, Hoggarth, Gil <gil.hogga...@bl.uk> wrote:
> We have an internal Solr collection with ~1 billion documents. It's
> split across 24 shards and uses ~3.2TB of disk space. Unfortunately
> we've triggered an 'optimize' on the collection (via a restarted browser
> tab), which has raised the disk usage to 4.6TB, with 130GB left on the
> disk volume.
>
> As I fully expect Solr to use up all of the disk space, given that the
> collection is more than 50% of the disk volume, how can I cancel this
> optimize? And separately, if I were to reissue it with maxSegments=(some
> high number, e.g. 40), should I still expect the same disk usage? (I'm
> presuming so, as doesn't it need to gather the whole index to determine
> which docs should go into which segments?)
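
Re the maxSegments question: as far as I know a merge doesn't gather
the whole index or redistribute docs by key, it just rewrites whole
segments into fewer ones, so a higher maxSegments generally means less
data gets rewritten. It still needs room for old and new copies of
whatever it does merge, though. If you do reissue it, the cap goes on
the same update call; a rough sketch, with host and core name as
placeholders:

    from urllib.request import urlopen

    # Placeholder host and core name; substitute your own. optimize=true
    # triggers the forced merge via the update handler, and maxSegments
    # caps how far it merges instead of going down to a single segment.
    url = ("http://localhost:8983/solr/collection1/update"
           "?optimize=true&maxSegments=40")
    print(urlopen(url).read().decode("utf-8"))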
>
> Solr 4.4 on RHEL6.4, 160GB RAM, 5GB per shard.
>
> (Great conference last week btw - so much to learn!)
>
> Gil Hoggarth
>
> Web Archiving Technical Services Engineer
>
> The British Library, Boston Spa, West Yorkshire, LS23 7BQ
>
> Tel: 01937 546163