Karl,

Thanks for the information.
That was indeed the issue.
I have set up daily vacuuming and table analysis.
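
For anyone searching the archives later: a daily run of vacuumdb with --analyze 
is enough to cover both the vacuum and the analysis, e.g. (the user and database 
names here are placeholders, adjust for your own install):

  "D:\ProgramFiles\PostgreSQL\9.3\bin\vacuumdb.exe" -U postgres --analyze manifoldcf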


Regards


Damien Collis
Team Leader – Systems Integration
Technology & Innovation Division, Link Group

Level 3, 1A Homebush Bay Drive, Rhodes NSW 2138
T+61 2 9375 7909

[email protected]<mailto:[email protected]> 
www.linkgroup.com<http://www.linkgroup.com/>

From: Karl Wright <[email protected]>
Sent: Friday, 28 September 2018 9:41 AM
To: [email protected]
Subject: Re: Status and Job Management

Hi Damien,

I basically wanted to see if you were using PostgreSQL.

PostgreSQL is not very efficient at counting records; it's one of PostgreSQL's 
flaws as a database.  So in ManifoldCF, we limit the number of records counted 
on the status page, displaying ">xxxx" where xxxx is the limit.  The limit is 
settable in properties.xml; you can find the property on the 
"how-to-build-and-deploy" page.

So, there are three possibilities.  The first possibility is that you have a 
jobqueue table that is in need of analysis.  MCF runs analysis automatically as 
tables grow, but it's possible that you might see some benefit in the query 
plan from analyzing that table by hand.  You can tell whether this is the 
problem because manifoldcf.log will contain messages about long-running 
queries, including plan information.
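
If you want to force the issue, you can run a manual ANALYZE on that table from 
psql, connected to the ManifoldCF database; it is quick and safe to do while 
the system is running:

  ANALYZE jobqueue;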

The second possibility is that your database needs to be vacuumed.  PostgreSQL 
needs periodic maintenance.  Automatic vacuuming does take place, but I suggest 
shutting ManifoldCF down and running "VACUUM FULL" once in a while to clean 
things up.
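
In other words, something like this from psql, connected to the ManifoldCF 
database, with ManifoldCF shut down (VACUUM FULL rewrites tables and takes 
exclusive locks, so it should not run against a live crawler):

  VACUUM FULL;
  ANALYZE;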

The third possibility is that you have a very large number of large jobs.  If 
that is the case, you might want to decrease the limit parameter mentioned 
above to reduce the time spent rendering the job status page.

Thanks,
Karl


On Thu, Sep 27, 2018 at 7:20 PM Damien Collis 
<[email protected]> wrote:
Karl,

I have a single Windows 2012 standalone environment hosting ManifoldCF, Solr 
and a PostgreSQL database:
4 x Xeon E5-2660 cores
16GB physical memory (2GB Tomcat/ManifoldCF, 12GB Solr)


I have attached my D:\ProgramFiles\PostgreSQL\9.3\data\postgresql.conf file. 
For convenience, here are the parameters that are specifically mentioned in 
https://manifoldcf.apache.org/release/release-2.10/en_US/how-to-build-and-deploy.html#Configuring+a+PostgreSQL+database:

standard_conforming_strings = on
shared_buffers = 1024MB                                            # min 128kB
#checkpoint_segments = 300                      # in logfile segments, min 1, 16MB each
maintenance_work_mem = 2MB
# NOT FOUND tcpip_socket         true
checkpoint_timeout = 15min                      # range 30s-1h
datestyle = 'ISO,European'
autovacuum = on

I hope this is the information you require.

Regards


Damien Collis
Team Leader – Systems Integration
Technology & Innovation Division, Link Group

Level 3, 1A Homebush Bay Drive, Rhodes NSW 2138
T+61 2 9375 7909

[email protected]<mailto:[email protected]> 
www.linkgroup.com<http://www.linkgroup.com/>

From: Karl Wright <[email protected]>
Sent: Thursday, 27 September 2018 5:10 PM
To: [email protected]
Subject: Re: Status and Job Management

Hi Damien,

Can you describe your database setup?

Karl


On Thu, Sep 27, 2018 at 1:50 AM Damien Collis 
<[email protected]> wrote:
All,

I am currently having trouble loading the “Status and Job Management” web page. 
I have set up a new Job but am unable to start it.

Sometimes the “Status and Job Management” page will eventually respond (after 
more than 30 minutes), but by then I have moved on, and when I finally check 
back to see whether it has rendered, my session has timed out.
I know my setup is a little under-spec for what I am indexing/crawling, but I 
have no requirement to improve its functional performance; I am only having an 
issue from an administrative perspective.

Any pointers would be greatly appreciated.


Regards


Damien Collis
Team Leader – Systems Integration
Technology & Innovation Division, Link Group

Level 3, 1A Homebush Bay Drive, Rhodes NSW 2138
T+61 2 9375 7909

[email protected]<mailto:[email protected]> 
www.linkgroup.com<http://www.linkgroup.com/>
