On Wed, 25 Feb 2026 09:57:06 GMT, Thomas Stuefe <[email protected]> wrote:

>> Hi @kevinjwalls, here is a snippet from its current position under 
>> "PROCESS". I added it here since the count is process-related, but I agree it 
>> might be nice to have it closer to the NOFILE count in the "OS" section 
>> under "SYSTEM".
>> 
>> 
>> ---------------  P R O C E S S  ---------------
>> 
>> Heap address: 0x0000000320000000, size: 9216 MB, Compressed Oops mode: Zero 
>> based, Oop shift amount: 3
>> 
>> CDS archive(s) not mapped
>> Compressed class space mapped at: 0x00000fc000000000-0x00000fc040000000, 
>> reserved size: 1073741824
>> UseCompressedClassPointers 1, UseCompactObjectHeaders 0
>> Narrow klass pointer bits 32, Max shift 3
>> Narrow klass base: 0x00000fc000000000, Narrow klass shift: 0
>> Encoding Range: [0x00000fc000000000 - 0x00000fc100000000), (4294967296 bytes)
>> Klass Range:    [0x00000fc000000000 - 0x00000fc040000000), (1073741824 bytes)
>> Klass ID Range:  [8 - 1073741817) (1073741809)
>> Protection zone: [0x00000fc000000000 - 0x00000fc000010000), (65536 bytes)
>> 
>> OpenFileDescriptorCount = 5
>> 
>> GC Precious Log:
>>  CardTable entry size: 512
>>  Card Set container configuration: InlinePtr #cards 4 size 8 Array Of Cards 
>> #cards 64 size 144 Howl #buckets 8 coarsen threshold 14745 Howl Bitmap 
>> #cards 2048 size 272 coarsen threshold 1843 Card regions per heap region 1 
>> cards per card region 16384
>>  CPUs: 11 total, 11 available
>>  Memory: 36864M
>>  Large Page Support: Disabled
>>  NUMA Support: Disabled
>>  Compressed Oops: Enabled (Zero based)
>>  Heap Region Size: 8M
>>  Heap Min Capacity: 8M
>>  Heap Initial Capacity: 8M
>>  Heap Max Capacity: 9G
>>  Pre-touch: Disabled
>>  Parallel Workers: 9
>>  Concurrent Workers: 2
>>  Concurrent Refinement Workers: 9
>>  Periodic GC: Disabled
>> 
>> Heap:
>>  garbage-first heap   total reserved 9437184K, committed 8192K, used 4545K 
>> [0x0000000320000000, 0x0000000560000000)
>>   region size 8M, 1 eden (8M), 0 survivor (0M), 0 old (0M), 0 humongous 
>> (0M), 0 free (0M)
>
> Hi @kieran-farrell 
> 
> I have thought some more about this and think this would be much more useful 
> if we were to actually print information about the open file descriptors. It 
> would be perfectly fine to print only the lowest n file descriptors (say, 
> the lowest 100), since 99.9% of all JVM runs open only a couple of them.
> 
> Something like this:
> 
> 
> Open File Descriptors:
> 0 (fifo)
> 1 (fifo)
> 2 (fifo)
> 7 (reg) /my/path/to/java
> 22 (socket)
> 99 (reg) /another/file
> 
> (1232 more skipped. Total count: 1238).
> 
> 
> That would give us very nice synergies, e.g. when looking for fd leaks: we 
> have a switch to print out a VM.info report when the JVM shuts down, to 
> stdout. So now I could see how many and which file descriptors I accumulated 
> in a JVM over the course of, say, a test run. As it is now, I either have to 
> use `lsof`, which is annoying to deal with when the process is short-lived, 
> or I have to do my own ad hoc printout.
> 
> I think the printout could be very simple: just basically fstat(), then print 
> the type (regular file, fifo, socket etc); then, for regular files or 
> directories, do a readlink() and print out the file name. This is what I 
> usually use for ad-hoc printouts when analyzing fd leaks. 
> 
> And as I said, it would be completely fine to do this for the first 100 file 
> descriptors (or any below "100") only; then maybe print dot dot dot and the 
> final count.
> 
> As a side effect, the "os::print_file_descriptors()" would now be correctly 
> named. As it is now, it does not print file descriptors, but the file 
> descriptor count.
> 
> ---
> 
> Wrt the stack-usage issue on MacOS: Please don't build up large stack usage 
> in error handling. As I have said, with recursive secondary signal handling, 
> that is not a good idea since we may run with not much stack left.
> 
> Instead, do the following:
> - when called from outside error handling (e.g. jcmd VM.info), you can use 
> the in-function array
> - when used during error handling, use a statically allocated array. Error 
> handling always happens single-threaded.
> 
> Best way to do that:
> - provide two versions of your function. One that takes a pointer to a 
> scratch buffer as storage on input, one that does not. For the one that does 
> not, use an in-function array. Use the one with the scratch buffer as input 
> from VMError::report(), and fill in the VMError scratch buffer. There are 
> enough examples for this (search for O_BUFLEN or somesuch in VMError, you'll 
> see what I mean).

@tstuefe thank you for the comments.

I have added two commits. The first creates a second 
`print_open_file_descriptors` implementation, used in the signal-handling 
context on macOS only, taking a pointer to the scratch buffer as suggested (I 
wasn't aware of this buffer).

The second allows the other implementation (the one used by `VM.info`) to call 
malloc and print the actual number of open FDs, rather than limiting it to a 
maximum dictated by a pre-allocated stack buffer (that limit was only added 
with signal handling in mind when the method was shared).

Printing info about the FD types is a fine idea, I'll follow up on this.

-------------

PR Comment: https://git.openjdk.org/jdk/pull/27971#issuecomment-4000029201
