The long version:
It might seem counterintuitive, but I've seen MORE shared disk I/O when 
workers used local storage.  I believe this is because an additional copy 
of the workspace needs to be maintained on each worker's local drive. 
Peak disk I/O occurs when a media package is first accepted and ingest 
starts: the zip file is unpacked and the pieces are placed in their 
various areas to be processed.  After that point, most of what the 
workers do must happen in memory, with minimal file reads and writes, as 
I see comparatively very little disk I/O.

The short version:
Using local disk for workers seems to require more disk activity at 
exactly the times when disk I/O is already in high demand.

When I configured workers to use their own disk, I deployed them with the 
-Pworker,serviceregistry,workspace-stub profile and pointed 
org.opencastproject.workspace.rootdir at local disk, e.g. /mnt/encoding.
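
For the record, the relevant worker-side settings in my setup looked 
roughly like the sketch below. The property names are the standard 
Matterhorn ones mentioned in this thread; the paths are from my 
environment, so adjust them to match yours:

```properties
# Workspace on fast local disk (my mount point; yours may differ)
org.opencastproject.workspace.rootdir=/mnt/encoding

# Shared storage stays on NFS so the distribution/engage node
# can reach published files
org.opencastproject.storage.dir=/opt/matterhorn/content
org.opencastproject.download.directory=${org.opencastproject.storage.dir}/downloads
```

Note that this is the configuration that, in my experience, produced the 
extra shared disk I/O described above, so test it under load before 
committing to it.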



> Steven M Lichti <[email protected]> 
> 
> I want to do the encoding on my worker VMs on local drives. I 
> received a 50GB /dev/sdb1 device on each of two workers, mounted at 
> /mnt/encoding.
> 
> I'm going through my configuration files now, and I would like to 
> have the encoding workspace point to /mnt/encoding. However, since 
> only the workers appear to need this (and PLEASE correct me if I'm 
> wrong), only the workers will need to have this setting:
> org.opencastproject.storage.dir=/opt/matterhorn/content
> 
> /opt/matterhorn/content is a NFS link to shared storage, but it's 
> crazy-slow to encode across the network.
> 
> My goal is to do the encoding locally (on the workers) in /mnt/
> encoding, then publish to /opt/matterhorn/content, where the 
> distribution/engage server can get at the files.
> 
> There is also the setting: 
> org.opencastproject.download.directory=$
> {org.opencastproject.storage.dir}/downloads
> 
> If I change the org.opencastproject.storage.dir variable on the 
> workers to /mnt/encoding, then set the downloads directory to /opt/
> matterhorn/content/downloads, would that do the trick?
> 
> Please let me know what you all think…
> 
> Thank you!
> 
> --Steven.
_______________________________________________
Matterhorn-users mailing list
[email protected]
http://lists.opencastproject.org/mailman/listinfo/matterhorn-users