[ My last email of the night, as it's our anniversary, and I'm off to dinner now ;-) ]
On Thu, 28 Aug 2025 15:10:52 -0700
Linus Torvalds <torva...@linux-foundation.org> wrote:

> On Thu, 28 Aug 2025 at 14:17, Steven Rostedt <rost...@kernel.org> wrote:
> >
> > But that's unique per task, right? What I liked about the f_inode
> > pointer, is that it appears to be shared between tasks.
>
> I actually think the local meaning of the file pointer is an advantage.
>
> It not only means that you see the difference in mappings of the same
> file created with different open calls, it also means that when
> different processes mmap the same executable, they don't see the same
> hash.
>
> And because the file pointer doesn't have any long-term meaning, it
> also means that you also can't make the mistake of thinking the hash
> has a long lifetime. With an inode pointer hash, you could easily have
> software bugs that end up not realizing that it's a temporary hash,
> and that the same inode *will* get two different hashes if the inode
> has been flushed from memory and then loaded anew due to memory
> pressure.

This is a reasonable argument. But it is still nice to have the same
value for all tasks. This is for a "file_cache" that does get flushed
regularly (when various changes happen to the tracefs system). Its only
purpose is to map the user space stack trace hash value to a path name
(and build-id). But yeah, I do not want another file to get flagged
with the same hash.

> > I only want to add a new hash and print the path for a new file. If
> > several tasks are using the same file (which they are with the
> > libraries), then having the hash be the same between tasks would be
> > more efficient.
>
> Why? See above why I think it's a mistake to think those hashes have
> lifetimes. They don't. Two different inodes can have the same hash due
> to lifetime issues, and the same inode can get two different hashes at
> different times for the same reason.
>
> So you *need* to tie these things to the only lifetime that matters:
> the open/close pair (and the mmap - and the stack traces - will be
> part of that lifetime).
>
> I literally think that you are not thinking about this right if you
> think you can re-use the hash.

I'm just worried about this causing slowdowns, especially if I also
track the build-id.

I did a quick update to the code to first use the f_inode and get the
build_id, and it gives:

 trace-cmd-1012 [003] ...1. 35.247318: inode_cache: hash=0xcb214087 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 trace-cmd-1012 [003] ...1. 35.247333: inode_cache: hash=0x2565194a path=/usr/local/bin/trace-cmd build_id={0x3f399e26,0xf9eb2d4d,0x475fa369,0xf5bb7eeb,0x6244ae85}
 trace-cmd-1012 [003] ...1. 35.247419: inode_cache: hash=0x22dca920 path=/usr/local/lib64/libtracefs.so.1.8.2 build_id={0x6b040bdb,0x961f23d6,0xc1e1027e,0x7067c348,0xd069fa67}
 trace-cmd-1012 [003] ...1. 35.247455: inode_cache: hash=0xe87b6ea5 path=/usr/local/lib64/libtraceevent.so.1.8.4 build_id={0x8946b4eb,0xe3bf4ec5,0x11fd7d86,0xcd3105e2,0xe44a8d4d}
 trace-cmd-1012 [003] ...1. 35.247488: inode_cache: hash=0xafc34117 path=/usr/lib/x86_64-linux-gnu/libzstd.so.1.5.7 build_id={0x379dc873,0x32bbdbc4,0x91eeb6cf,0xba549730,0xe2b96c55}
 bash-1003 [001] ...1. 35.248508: inode_cache: hash=0xcf9bd2d6 path=/usr/bin/bash build_id={0xd94aa36d,0x8e1f19c7,0xa4a69446,0x7338f602,0x20d66357}
 NetworkManager-581 [004] ...1. 35.703993: inode_cache: hash=0xea1c3e22 path=/usr/sbin/NetworkManager build_id={0x278c6dbb,0x4a1cdde6,0xa1a30a2c,0xbc417464,0x9dfaa28e}
 bash-1003 [001] ...1. 35.904817: inode_cache: hash=0x133252fa path=/usr/lib/x86_64-linux-gnu/libtinfo.so.6.5 build_id={0xff2193a5,0xb2ece2f1,0x1bcbd242,0xca302a0b,0xc155fd26}
 bash-1013 [004] ...1. 37.716435: inode_cache: hash=0x53ae379b path=/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 build_id={0x4ed9e462,0xb302cd84,0x3ccf0104,0xbd80ac72,0x91c7fd44}
 bash-1013 [004] ...1. 37.722923: inode_cache: hash=0xa55a259e path=/usr/lib/x86_64-linux-gnu/libz.so.1.3.1 build_id={0xc2d9e5b6,0xb211e958,0xdef878e4,0xe4022df,0x9552253}
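(For reference, the quick hack is roughly along the lines of the sketch
below. It's a simplified sketch only, assuming hash_ptr() for the hash
and build_id_parse() for the build ID; report_mapped_file() and
trace_inode_cache() are placeholder names standing in for the actual
event plumbing.)

#include <linux/hash.h>
#include <linux/buildid.h>
#include <linux/mm.h>
#include <linux/fs.h>

/* Sketch: hash the inode backing the vma and read its ELF build ID */
static void report_mapped_file(struct vm_area_struct *vma)
{
	unsigned char build_id[BUILD_ID_SIZE_MAX];
	__u32 size = 0;
	u32 hash;

	if (!vma->vm_file)
		return;

	/* Same inode => same hash for every task (until the inode is evicted) */
	hash = hash_ptr(vma->vm_file->f_inode, 32);

	/* Read the build ID from the mapped ELF file, if it has one */
	if (build_id_parse(vma, build_id, &size) < 0)
		size = 0;

	/* Emit the side event carrying the hash, the file (for the path)
	 * and the build ID */
	trace_inode_cache(hash, vma->vm_file, build_id, size);
}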
Now I changed it to be the file pointer, and it does give a bit more
(see the duplicates):

 sshd-session-1004 [007] ...1. 98.940058: inode_cache: hash=0x41a6191a path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 trace-cmd-1016 [006] ...1. 98.940089: inode_cache: hash=0xcc38a542 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 trace-cmd-1016 [006] ...1. 98.940109: inode_cache: hash=0xa89cdd4b path=/usr/local/bin/trace-cmd build_id={0x3f399e26,0xf9eb2d4d,0x475fa369,0xf5bb7eeb,0x6244ae85}
 trace-cmd-1016 [006] ...1. 98.940410: inode_cache: hash=0xb3c570ca path=/usr/local/lib64/libtracefs.so.1.8.2 build_id={0x6b040bdb,0x961f23d6,0xc1e1027e,0x7067c348,0xd069fa67}
 trace-cmd-1016 [006] ...1. 98.940460: inode_cache: hash=0x4da4af85 path=/usr/local/lib64/libtraceevent.so.1.8.4 build_id={0x8946b4eb,0xe3bf4ec5,0x11fd7d86,0xcd3105e2,0xe44a8d4d}
 trace-cmd-1016 [006] ...1. 98.940513: inode_cache: hash=0xce16bd9d path=/usr/lib/x86_64-linux-gnu/libzstd.so.1.5.7 build_id={0x379dc873,0x32bbdbc4,0x91eeb6cf,0xba549730,0xe2b96c55}
 bash-1007 [004] ...1. 98.941772: inode_cache: hash=0x772df671 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 bash-1007 [004] ...1. 98.941911: inode_cache: hash=0xdb764962 path=/usr/bin/bash build_id={0xd94aa36d,0x8e1f19c7,0xa4a69446,0x7338f602,0x20d66357}
 bash-1007 [004] ...1. 100.080299: inode_cache: hash=0xef3bf212 path=/usr/lib/x86_64-linux-gnu/libtinfo.so.6.5 build_id={0xff2193a5,0xb2ece2f1,0x1bcbd242,0xca302a0b,0xc155fd26}
 gmain-602 [003] ...1. 100.477235: inode_cache: hash=0xc9205658 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 trace-cmd-1017 [005] ...1. 101.412116: inode_cache: hash=0x5a77751e path=/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 build_id={0x4ed9e462,0xb302cd84,0x3ccf0104,0xbd80ac72,0x91c7fd44}
 trace-cmd-1017 [005] ...1. 101.417004: inode_cache: hash=0xf2e95689 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 trace-cmd-1017 [005] ...1. 101.418528: inode_cache: hash=0x5f35d3ca path=/usr/lib/x86_64-linux-gnu/libzstd.so.1.5.7 build_id={0x379dc873,0x32bbdbc4,0x91eeb6cf,0xba549730,0xe2b96c55}
 trace-cmd-1017 [005] ...1. 101.418572: inode_cache: hash=0x57feda78 path=/usr/lib/x86_64-linux-gnu/libz.so.1.3.1 build_id={0xc2d9e5b6,0xb211e958,0xdef878e4,0xe4022df,0x9552253}
 trace-cmd-1017 [005] ...1. 101.418620: inode_cache: hash=0x22ad5d84 path=/usr/local/lib64/libtraceevent.so.1.8.4 build_id={0x8946b4eb,0xe3bf4ec5,0x11fd7d86,0xcd3105e2,0xe44a8d4d}
 trace-cmd-1017 [005] ...1. 101.418666: inode_cache: hash=0x11c240a6 path=/usr/local/lib64/libtracefs.so.1.8.2 build_id={0x6b040bdb,0x961f23d6,0xc1e1027e,0x7067c348,0xd069fa67}
 trace-cmd-1017 [005] ...1. 101.418714: inode_cache: hash=0xf4e46cf path=/usr/local/bin/trace-cmd build_id={0x3f399e26,0xf9eb2d4d,0x475fa369,0xf5bb7eeb,0x6244ae85}
 wpa_supplicant-583 [000] ...1. 102.521195: inode_cache: hash=0xd20a587b path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 trace-cmd-1018 [005] ...1. 102.847910: inode_cache: hash=0xee16ee8e path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 sshd-session-1004 [000] ...1. 102.853561: inode_cache: hash=0x3404c7ea path=/usr/lib/openssh/sshd-session build_id={0x3b119855,0x5b15323e,0xe1ec337a,0xbd49f66e,0x78bddd0f}
 systemd-udevd-323 [007] ...1. 125.800839: inode_cache: hash=0x760273d5 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 systemd-journal-294 [000] ...1. 125.800932: inode_cache: hash=0x77f34056 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 systemd-udevd-323 [007] ...1. 125.801135: inode_cache: hash=0xe70bd063 path=/usr/lib/x86_64-linux-gnu/systemd/libsystemd-shared-257.so build_id={0x81d9bace,0x59f9953f,0x439928d7,0xe849d513,0xf2103286}
 systemd-1 [006] ...1. 125.801781: inode_cache: hash=0x42292844 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 systemd-1 [006] ...1. 125.802811: inode_cache: hash=0x2cac8b3b path=/usr/lib/x86_64-linux-gnu/systemd/libsystemd-core-257.so build_id={0x580a80c5,0x931714d2,0xec54d3be,0xd5400bc0,0x6f2530ba}
 systemd-1 [006] ...1. 125.803740: inode_cache: hash=0xb17acaa6 path=/usr/lib/x86_64-linux-gnu/systemd/libsystemd-shared-257.so build_id={0x81d9bace,0x59f9953f,0x439928d7,0xe849d513,0xf2103286}
 cron-541 [006] ...1. 138.192640: inode_cache: hash=0x9285db61 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 NetworkManager-581 [005] ...1. 144.716224: inode_cache: hash=0xf3c5bbc1 path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 NetworkManager-581 [005] ...1. 144.716392: inode_cache: hash=0x381883bb path=/usr/sbin/NetworkManager build_id={0x278c6dbb,0x4a1cdde6,0xa1a30a2c,0xbc417464,0x9dfaa28e}
 NetworkManager-581 [005] ...1. 146.385151: inode_cache: hash=0x43451e15 path=/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.8400.0 build_id={0x9a7d3e29,0x5d8ed8f,0xe399da0,0xb5d373da,0x3ca1049b}
 chronyd-663 [001] ...1. 157.080405: inode_cache: hash=0xa0db647a path=/usr/lib/x86_64-linux-gnu/libc.so.6 build_id={0x10bddb6d,0xf5234181,0xc2f72e26,0x1aa4f797,0x6aa19eda}
 chronyd-663 [001] ...1. 158.152790: inode_cache: hash=0x1c471c4c path=/usr/sbin/chronyd build_id={0xf9588e62,0x3a8e6223,0x619fcb4f,0x12562bb,0x2ea104fb}
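(The only difference in that run is which pointer gets hashed, i.e.
roughly:

	hash = hash_ptr(vma->vm_file, 32);

instead of hashing vma->vm_file->f_inode, so every separate open of the
same file gets its own hash, which is where the duplicate libc entries
above come from.)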
But maybe it's not enough to be an issue. This will become more
prevalent, though, once sframes are built in throughout the system. I
only have a few applications with sframes enabled, so not every task is
getting a full stack trace, and hence not all of the files being
touched are being displayed.

Just to clarify my concern: I want the stack traces to be quick and
small. I believe a 32-bit hash may be enough. And then have a side
event that gets emitted when new files appear and that can display much
more information. This side event may be slow, which is why I don't
want it to occur often. But I do want it to occur for all new files.

-- Steve