On 11/4/2018 4:47 PM, Matt Jadud wrote:
> On Sun, Nov 4, 2018 at 2:17 PM George Neuner <gneun...@comcast.net> wrote:
>> Are you using in-process places or distributed places?
>> In-process places are just OS threads in the same process.
>> Distributed places can be launched in/as separate processes,
>> but then each process would have its own set of file descriptors.
> Distributed places. However, even if they have their own descriptors,
> they would be *my* descriptors. I think I'm limited to 1024 based on
> the current machine configuration.
I'm not sure what you mean by "*my* descriptors".

Distributed places are separate *processes*, and the descriptor limit
applies per process. With 4K file descriptors available to each
process, it is even harder to see how you could be running out of them.
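[For reference, the actual per-process limits are easy to check from a
shell (Linux/bash assumed). The soft limit is what a process actually
gets; the hard limit is the ceiling it may raise its soft limit to
without root:]

```shell
# Soft limit: descriptors this process may actually use.
ulimit -Sn
# Hard limit: ceiling the process may raise its soft limit to.
ulimit -Hn
```

[On many Linux distributions these default to 1024 and 4096
respectively, which would account for both numbers in this thread.]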
>> What DBMS(es)? In-process DBMS like SQLite use file
>> descriptors, but client/server DBMS use network connections
>> (which don't count as open files).
> I had that in a draft of this message; sorry for leaving it out. I
> have a connection to the SQLite DB per distributed place. I *could*
> convert all of this to use MySQL/Postgres, but I'm using SQLite
> because the dataset I'm working with is across the ocean, and I'm
> caching things in a way that lets me move between machines/clusters
> easily without having to set up new services. I could possibly reduce
> this to a pool of DB connections, and use a place channel to do all
> the queries against the DB. This would cut my number of descriptors down.
That's only 1 descriptor per place, and if you really are running
separate processes then it's insignificant. Even in a single process,
128 open files (for SQLite) would be only about 3% of your limit.
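[A quick sanity check, assuming Linux with /proc mounted: count a
process's open descriptors directly. `$$` below is this shell's own
PID, used only so the command is self-contained; substitute the PID of
the racket process you actually want to inspect:]

```shell
# Count the open file descriptors of a process via /proc.
# $$ (this shell) is a stand-in; use the racket process's PID.
ls /proc/$$/fd | wc -l
```

[Any process will show at least 3 (stdin, stdout, stderr); a number
near the soft limit is the smoking gun.]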
>> IIRC, bytecode files are memory mapped, and that requires the file
>> be kept open. But even if every file is mapped into every place,
>> you'd need a lot of code files to exhaust 4K descriptors ... if it
>> is being done smartly [???], there would only be 1 descriptor
>> needed per file.
> Unless it is 1024 descriptors, and if I have one .zo per distributed
> place, and a DB, and... the question of whether library files count
> against that .zo file descriptor count (I don't know)... that could
> quickly exhaust things at the 128-distributed-place level. I have
> multiple libraries included (sql, gregor, beautiful-racket-lib), and
> several files that I've written are also required.
You misunderstand. If done smartly, 1024 mapped files * N places [in
the same process] would need only 1024 descriptors. But I don't know
whether it is done smartly [it might not be], and since there is also
JIT compilation to consider, it's possible that the .zo file isn't
mapped at all when JIT is enabled but merely read and compiled.
Unfortunately I don't know the internals.

One .zo per distributed place is just one descriptor per process.
Again, that's insignificant because you have 4K per process.
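[Whether .zo files are held open, memory-mapped, or neither can be
checked empirically on Linux: open descriptors appear under
/proc/<pid>/fd and mapped files in /proc/<pid>/maps. A sketch, where
RACKET_PID is a hypothetical placeholder for a running racket
process's PID; it falls back to this shell only so the commands run:]

```shell
# RACKET_PID is a placeholder; point it at a running racket process.
RACKET_PID=${RACKET_PID:-$$}
# .zo files held open as ordinary descriptors:
ls -l /proc/"$RACKET_PID"/fd 2>/dev/null | grep -c '\.zo' || true
# .zo files memory-mapped into the process:
grep -c '\.zo' /proc/"$RACKET_PID"/maps || true
```

[A nonzero count in maps but zero in fd would mean the files are
mapped without a descriptor being held, i.e. they don't count against
the limit.]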
It's apparent that the code is leaking descriptors somewhere. Forcing GC
may mitigate the problem, but it would be better if you found the leak.
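[One way to hunt for the leak, again assuming Linux: sample the
suspect process's descriptor count over time. A count that climbs
steadily under a constant workload localizes the leak to whatever that
workload is doing. A minimal sketch, with this shell's PID as a
stand-in:]

```shell
# PID is a stand-in; use the racket process's PID when leak-hunting.
PID=$$
for i in 1 2 3; do
  # timestamp followed by current descriptor count
  printf '%s %s\n' "$(date +%T)" "$(ls /proc/"$PID"/fd | wc -l)"
  sleep 1
done
```

[Listing the entries themselves (`ls -l /proc/$PID/fd`) then shows
*which* files or sockets are accumulating.]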
George
--
You received this message because you are subscribed to the Google Groups "Racket
Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to racket-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.