Hi Ruben,

> error doing so (HTTP copy only supports 2GB [1]). Therefore now we
> deploy our agents with the workspace, serviceregistry and worker
> profiles. The files are ingested via HTTP [2] and once in the shared
> storage, there are no big copies of files (it's everything in the
> same share). I've just realised that this could be improved by

Ah, so the problem we run into is that the "shared storage" is actually
an NFS drive, so "shared storage" means "download the file repeatedly".
This is because our VMs have no local storage, and all drive access
means hitting an iSCSI setup.  Your solution doesn't have this
limitation, since the agents push the data into the shared store, one
agent (maybe the same one, maybe not) requests the data, and then all
operations happen on the local host.

Matterhorn loves its disk usage.  Not awesome for virtualized
environments, where disk access is also *usually* virtualized.

> [2] If arbitrarily large files are ingested via HTTP, why is [1]
> failing? Or we cannot ingest > 2GB files via HTTP?

We do download and ingest files greater than 2GB via HTTP.  This bug
appears to continue to haunt you!
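For what it's worth, a common cause of "works up to 2GB, then fails"
bugs in Java code like Matterhorn's is tracking a file or copy size in
a signed 32-bit int, which overflows past 2^31 - 1 bytes (~2 GiB) and
goes negative.  Whether that is the actual cause of the bug in [1] is
purely an assumption on my part, but the effect is easy to sketch:

```java
// Hypothetical illustration only: shows how a 3 GiB byte count
// overflows when narrowed to a signed 32-bit int.  This is a guess at
// the class of bug behind [1], not a claim about Matterhorn's code.
public class TwoGBOverflow {
    public static void main(String[] args) {
        long fileSize = 3L * 1024 * 1024 * 1024; // 3 GiB, fine as a long
        int copied = (int) fileSize;             // narrowing wraps around
        System.out.println("actual size: " + fileSize); // 3221225472
        System.out.println("as int:      " + copied);   // negative
        System.out.println("overflowed:  " + (copied < 0));
    }
}
```

Any length check or progress counter that hits such an int would then
reject or corrupt the transfer right at the 2GB mark.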

Regards,

Chris
-- 
Christopher Brooks, BSc, MSc
ARIES Laboratory, University of Saskatchewan

Web: http://www.cs.usask.ca/~cab938
Phone: 1.306.966.1442
Mail: Advanced Research in Intelligent Educational Systems Laboratory
     Department of Computer Science
     University of Saskatchewan
     176 Thorvaldson Building
     110 Science Place
     Saskatoon, SK
     S7N 5C9
_______________________________________________
Matterhorn-users mailing list
[email protected]
http://lists.opencastproject.org/mailman/listinfo/matterhorn-users