On Mon, 2015-12-28 at 16:59 +0200, Petko Manolov wrote:
> On 15-12-28 09:42:22, Mimi Zohar wrote:
> > On Mon, 2015-12-28 at 16:29 +0200, Petko Manolov wrote:
> > >
> > > I kind of wonder whether it isn't possible to optimize the file
> > > read. If the file is relatively small (a few megabytes, for
> > > example), it will fit into any modern system's memory. At least
> > > on those systems that care to run IMA, I mean.
> > >
> > > Fetching a file page by page is slow, even though the BIO
> > > subsystem reads larger chunks off the real storage devices. Has
> > > anyone run a benchmark?
> > Dmitry recently added asynchronous hash (ahash) support, which allows HW
> > crypto acceleration, for calculating the file hash.
> This is nice. However, I was referring to reading a file page by page
> vs. reading it in larger (a couple of megabytes) chunks. Is this also
> covered by the ahash support?
Yes, basically it attempts to allocate a buffer large enough for the
entire file. On failure, ahash attempts to allocate two buffers larger
than PAGE_SIZE, falling back to just PAGE_SIZE if even that fails.
Refer to ima_alloc_pages() for a full description.
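To illustrate the idea, here is a minimal userspace sketch of that
allocation fallback. It is not the kernel's ima_alloc_pages() code:
the function name alloc_file_buf() and the simple halving loop are
assumptions made for illustration; the real kernel code chooses its
allocation orders differently and uses kernel allocators, not malloc().

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Hypothetical userspace analogue of the kernel's allocation fallback:
 * try a buffer big enough for the whole file, shrink the request on
 * allocation failure, and bottom out at a single page. */
static void *alloc_file_buf(size_t file_size, size_t *out_len)
{
    size_t len;

    for (len = file_size; len > PAGE_SIZE; len /= 2) {
        void *buf = malloc(len);
        if (buf) {
            *out_len = len;     /* got a large buffer on this try */
            return buf;
        }
    }
    *out_len = PAGE_SIZE;
    return malloc(PAGE_SIZE);   /* last resort: exactly one page */
}
```

The point of the shape is simply "big buffer first, one page last":
larger buffers mean fewer read/hash round trips per file.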
When two buffers are allocated, the next chunk of the file can be read
into one buffer while the hash of the other is still completing.
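A ping-pong buffer sketch of that overlap, under stated assumptions: the
hash here is a toy running byte sum standing in for the real ahash
digest, and everything runs synchronously. In the kernel, the
sum_update() step would be an asynchronous ahash request, so the memcpy
("read") of the next chunk could genuinely overlap it; this sketch only
shows the alternating-buffer structure.

```c
#include <assert.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define CHUNK 4   /* tiny chunk size, just for the demonstration */

/* Toy stand-in for an incremental hash update (a running byte sum). */
static uint32_t sum_update(uint32_t acc, const uint8_t *p, size_t n)
{
    while (n--)
        acc += *p++;
    return acc;
}

/* Hash a memory-resident "file" using two alternating buffers. */
static uint32_t hash_file(const uint8_t *file, size_t size)
{
    uint8_t buf[2][CHUNK];
    uint32_t acc = 0;
    size_t off = 0;
    int i = 0;

    while (off < size) {
        size_t n = size - off < CHUNK ? size - off : CHUNK;

        memcpy(buf[i], file + off, n);    /* "read" chunk into buf[i]    */
        acc = sum_update(acc, buf[i], n); /* hash it (async in kernel)   */
        off += n;
        i ^= 1;                           /* next read uses other buffer */
    }
    return acc;
}
```

With an asynchronous hash engine, the read into buf[1] proceeds while
the accelerator is still digesting buf[0], which is where the
double-buffer scheme wins over a single buffer.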