On Mon, Jul 21, 2025 at 7:37 PM Qu Wenruo wrote:
>
>
>
> > On 2025/7/21 19:55, Jan Kara wrote:
> > On Mon 21-07-25 11:14:02, Gao Xiang wrote:
> >> Hi Barry,
> >>
> >> On 2025/7/21 09:02, Barry Song wrote:
> >>> On Wed, Jul 16, 2025 at 8:28 AM Gao Xiang
> >>> wrote:
> [...]
> >>> Given the difficulty of allocating large folios, it's always a good
> >>> idea to have order-0 as a fallback.
On 2025/7/21 19:36, Qu Wenruo wrote:
On 2025/7/21 19:55, Jan Kara wrote:
On Mon 21-07-25 11:14:02, Gao Xiang wrote:
Hi Barry,
On 2025/7/21 09:02, Barry Song wrote:
On Wed, Jul 16, 2025 at 8:28 AM Gao Xiang wrote:
[...]
Given the difficulty of allocating large folios, it's always a good
idea to have order-0 as a fallback.
Hi Jan,
On 2025/7/21 18:25, Jan Kara wrote:
On Mon 21-07-25 11:14:02, Gao Xiang wrote:
Hi Barry,
On 2025/7/21 09:02, Barry Song wrote:
On Wed, Jul 16, 2025 at 8:28 AM Gao Xiang wrote:
...
... high-order folios can cause side effects on embedded devices
like routers and IoT devices, which still have MiBs of memory (and I
believe this won't change due to their use cases)
On 2025/7/21 19:55, Jan Kara wrote:
On Mon 21-07-25 11:14:02, Gao Xiang wrote:
Hi Barry,
On 2025/7/21 09:02, Barry Song wrote:
On Wed, Jul 16, 2025 at 8:28 AM Gao Xiang wrote:
[...]
Given the difficulty of allocating large folios, it's always a good
idea to have order-0 as a fallback. While I
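(A minimal sketch of the "order-0 as a fallback" idea, purely for illustration:
the page cache's readahead code already does this internally, and
filemap_alloc_folio() is the only real kernel API assumed here.)

#include <linux/pagemap.h>

static struct folio *example_alloc_folio_fallback(gfp_t gfp, unsigned int order)
{
	struct folio *folio;

	do {
		/* try the requested (large) order first */
		folio = filemap_alloc_folio(gfp, order);
		if (folio)
			return folio;
		/* memory too fragmented: retry one order lower */
	} while (order--);

	return NULL;	/* even order-0 failed */
}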
On Mon 21-07-25 11:14:02, Gao Xiang wrote:
> Hi Barry,
>
> On 2025/7/21 09:02, Barry Song wrote:
> > On Wed, Jul 16, 2025 at 8:28 AM Gao Xiang
> > wrote:
> > >
>
> ...
>
> > >
> > > ... high-order folios can cause side effects on embedded devices
> > > like routers and IoT devices, which still have MiBs of memory (and I
> > > believe this won't change due to their use cases)
Hi Barry,
On 2025/7/21 09:02, Barry Song wrote:
On Wed, Jul 16, 2025 at 8:28 AM Gao Xiang wrote:
...
... high-order folios can cause side effects on embedded devices
like routers and IoT devices, which still have MiBs of memory (and I
believe this won't change due to their use cases) but
On Wed, Jul 16, 2025 at 8:28 AM Gao Xiang wrote:
>
>
>
> On 2025/7/16 07:32, Gao Xiang wrote:
> > Hi Matthew,
> >
> > On 2025/7/16 04:40, Matthew Wilcox wrote:
> >> I've started looking at how the page cache can help filesystems handle
> >> compressed data better. Feedback would be appreciated!
On Wed, Jul 16, 2025 at 7:32 AM Gao Xiang wrote:
[...]
>
> I don't see this will work for EROFS because EROFS always supports
> variable uncompressed extent lengths and that will break typical
> EROFS use cases and on-disk formats.
>
> The other thing is that large order folios (physically consecutive) ...
On 2025/7/17 10:49, Eric Biggers wrote:
On Wed, Jul 16, 2025 at 11:37:28PM +0100, Phillip Lougher wrote:
...
buffer. I suspect that vmap() (or vm_map_ram() which is what f2fs uses)
is actually more efficient than these streaming APIs, since it avoids
the internal copy. But it would need
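(To make the vm_map_ram() point concrete, a rough sketch, not lifted from any
of the filesystems discussed, of handing a decompressor one contiguous
destination buffer over the target pages; my_decompress() stands in for
whatever backend is in use.)

#include <linux/mm.h>
#include <linux/vmalloc.h>

int my_decompress(const void *src, size_t srclen, void *dst, size_t dstlen);

static int example_decompress_contig(struct page **pages, unsigned int nr,
				     const void *src, size_t srclen)
{
	/* map the destination pages as one contiguous virtual range */
	void *dst = vm_map_ram(pages, nr, NUMA_NO_NODE);
	int ret;

	if (!dst)
		return -ENOMEM;

	/* decompress straight into the page cache pages, no bounce buffer */
	ret = my_decompress(src, srclen, dst, (size_t)nr * PAGE_SIZE);

	vm_unmap_ram(dst, nr);
	return ret;
}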
On Wed, Jul 16, 2025 at 11:37:28PM +0100, Phillip Lougher wrote:
> > There also seems to be some discrepancy between filesystems whether the
> > decompression involves vmap() of all the memory allocated or whether the
> > decompression routines can handle doing kmap_local() on individual pages.
> >
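(And the other flavour mentioned above, where the decompressor can emit output
one page at a time, only needs short-lived kmap_local mappings; again just a
sketch, with struct decomp_state and decomp_step() made up for illustration.)

#include <linux/highmem.h>

struct decomp_state;	/* opaque state of a hypothetical decompressor */
int decomp_step(struct decomp_state *s, void *out, size_t outlen);

static int example_decompress_per_page(struct decomp_state *s,
				       struct page **pages, unsigned int nr)
{
	unsigned int i;
	int ret = 0;

	for (i = 0; i < nr && ret == 0; i++) {
		/* map just this output page for the duration of one step */
		void *out = kmap_local_page(pages[i]);

		ret = decomp_step(s, out, PAGE_SIZE);
		kunmap_local(out);
	}
	return ret;
}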
Dear Mr. Matthew and other fs developers:
I'm very sorry. My gmail may have been blocked for reasons I don't know, so I
have to change my email domain.
> So, my proposal is that filesystems tell the page cache that their minimum
> folio size is the compression block size. That seems to be around 64k, so
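(If I read the proposal right, the filesystem side would look roughly like
this; only a sketch, and it assumes the mapping_set_folio_min_order() helper
added for the block-size > page-size work, with 64K just being the example
figure from the proposal.)

#include <linux/pagemap.h>

/*
 * Tell the page cache that folios in this mapping must be at least one
 * compression block in size, e.g. 64K => order 4 with 4K pages.
 * Assumes the compression block is at least one page.
 */
static void example_set_compression_block_size(struct inode *inode,
					       unsigned int cblock_shift)
{
	unsigned int min_order = cblock_shift - PAGE_SHIFT;

	mapping_set_folio_min_order(inode->i_mapping, min_order);
}

With that in place, readahead and the write path should only ever hand the
filesystem folios that cover a whole number of compression blocks, which as I
understand it is the property the proposal relies on.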
On 15/07/2025 21:40, Matthew Wilcox wrote:
I've started looking at how the page cache can help filesystems handle
compressed data better. Feedback would be appreciated! I'll probably
say a few things which are obvious to anyone who knows how compressed
files work, but I'm trying to be explicit about my assumptions.
On 2025/7/16 12:54, Qu Wenruo wrote:
On 2025/7/16 10:46, Gao Xiang wrote:
...
There's some discrepancy between filesystems whether you need scratch
space for decompression. Some filesystems read the compressed data into
the pagecache and decompress in-place, while other filesystems read the
compressed data into scratch pages and decompress into the page cache.
On 2025/7/16 10:46, Gao Xiang wrote:
...
There's some discrepancy between filesystems whether you need scratch
space for decompression. Some filesystems read the compressed data into
the pagecache and decompress in-place, while other filesystems read the
compressed data into scratch pages and decompress into the page cache.
Dear Matthew and other filesystem developers,
I've been experimenting with implementing large folio support for
compressed files in F2FS locally, and I'd like to describe the
situation from the F2FS perspective.
> First, I believe that all filesystems work by compressing fixed-size
> plaintext in
...
There's some discrepancy between filesystems whether you need scratch
space for decompression. Some filesystems read the compressed data into
the pagecache and decompress in-place, while other filesystems read the
compressed data into scratch pages and decompress into the page cache.
...
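(For the in-place case, the usual trick as I understand it is to read the
compressed bytes into the tail of the destination buffer and decompress
towards the front, so the output never overruns unread input provided there is
enough safety margin; a very rough sketch, assuming no highmem so
folio_address() covers the whole folio, with my_decompress() again a
placeholder.)

#include <linux/mm.h>

int my_decompress(const void *src, size_t srclen, void *dst, size_t dstlen);

static int example_decompress_in_place(struct folio *folio, size_t clen)
{
	void *buf = folio_address(folio);
	size_t dlen = folio_size(folio);

	/* the compressed data was read into the last clen bytes of the folio */
	return my_decompress(buf + dlen - clen, clen, buf, dlen);
}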
On 2025/7/16 06:10, Matthew Wilcox wrote:
I've started looking at how the page cache can help filesystems handle
compressed data better. Feedback would be appreciated! I'll probably
say a few things which are obvious to anyone who knows how compressed
files work, but I'm trying to be explicit about my assumptions.
On 2025/7/16 07:32, Gao Xiang wrote:
Hi Matthew,
On 2025/7/16 04:40, Matthew Wilcox wrote:
I've started looking at how the page cache can help filesystems handle
compressed data better. Feedback would be appreciated! I'll probably
say a few things which are obvious to anyone who knows how compressed
files work, but I'm trying to be explicit about my assumptions.
Hi Matthew,
On 2025/7/16 04:40, Matthew Wilcox wrote:
I've started looking at how the page cache can help filesystems handle
compressed data better. Feedback would be appreciated! I'll probably
say a few things which are obvious to anyone who knows how compressed
files work, but I'm trying to be explicit about my assumptions.
On Tue, Jul 15, 2025 at 09:40:42PM +0100, Matthew Wilcox wrote:
> I've started looking at how the page cache can help filesystems handle
> compressed data better. Feedback would be appreciated! I'll probably
> say a few things which are obvious to anyone who knows how compressed
> files work, but I'm trying to be explicit about my assumptions.
I've started looking at how the page cache can help filesystems handle
compressed data better. Feedback would be appreciated! I'll probably
say a few things which are obvious to anyone who knows how compressed
files work, but I'm trying to be explicit about my assumptions.
First, I believe that