Hi Bisonti,

Could you please share the other configuration details of the system you are using, such as the number of worker threads, the maximum DB connections, and any other related parameters? I am also facing a similar issue with slow crawl jobs, so that would be really helpful for me.
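For reference, in ManifoldCF the crawl parallelism settings normally live in properties.xml. A minimal sketch of the two entries in question, assuming the standard property names for worker threads and the database handle pool (the values shown are illustrative, not a tuning recommendation):

    <!-- Sketch of ManifoldCF properties.xml entries controlling crawl parallelism. -->
    <!-- Property names assume the standard single-process setup; values are illustrative. -->
    <configuration>
      <!-- Number of crawler worker threads -->
      <property name="org.apache.manifoldcf.crawler.threads" value="30"/>
      <!-- Maximum number of database connections (handles) the process may open -->
      <property name="org.apache.manifoldcf.database.maxhandles" value="50"/>
    </configuration>

If the handle count is raised, PostgreSQL's max_connections should be kept comfortably above it so the worker threads are not starved of database connections.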
Thanks and Regards,
Nikita

On Mon, Jan 28, 2019 at 2:19 PM Bisonti Mario <[email protected]> wrote:

> I read that VACUUM FULL isn't recommended for Postgres versions >= 9.3, so I don't execute it.
>
> Today, after the weekly "vacuumdb --all --analyze" run, I see that the job finished in 8 and a half hours, so less than the last execution.
>
> I will monitor over the next days whether the job takes more time or not.
>
> I could consider executing "vacuumdb --all --analyze" daily, if that could help.
>
> From: Karl Wright <[email protected]>
> Sent: Friday, January 25, 2019 17:39
> To: [email protected]
> Subject: Re: Job slower
>
> Did you try 'vacuum full'?
>
> Karl
>
> On Fri, Jan 25, 2019 at 3:47 AM Bisonti Mario <[email protected]> wrote:
>
> Hello.
>
> I use MCF 2.12, PostgreSQL 9.3.25, Solr 7.6, and Tika 1.19 on Ubuntu Server 18.04.
>
> Weekly, I schedule the following by crontab for the postgres user:
>
> 15 8 * * Sun vacuumdb --all --analyze
> 20 10 * * Sun reindexdb postgres
> 25 10 * * Sun reindexdb dbname
>
> I see that the job that indexes 700000 documents daily runs slower day by day.
>
> It ran in 8 hours a few weeks ago, but now it runs in 12 hours, and the number of documents has not changed much.
>
> What could I do to speed up the job?
>
> Thanks a lot
>
> Mario

--
Thanks and Regards,
Nikita
Email: [email protected]
United Sources Service Pvt. Ltd., a "Smartshore" Company
Mobile: +91 99 888 57720
http://www.smartshore.nl
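For reference, the daily maintenance run Mario mentions above could be scheduled next to the existing weekly entries. A sketch of the postgres user's crontab under that assumption (the chosen time is illustrative):

    # Existing weekly maintenance, as quoted in the thread
    15 8 * * Sun vacuumdb --all --analyze
    20 10 * * Sun reindexdb postgres
    25 10 * * Sun reindexdb dbname
    # Possible additional daily analyze run (time illustrative)
    30 5 * * * vacuumdb --all --analyze

Plain VACUUM ANALYZE updates planner statistics and marks dead rows reusable without the exclusive lock that VACUUM FULL takes, so it can run alongside normal activity; scheduling it in a quiet window just limits the extra I/O.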
