On Sun, Nov 4, 2018 at 2:17 PM George Neuner <gneun...@comcast.net> wrote:

> Are you using in-process places or distributed places?  In-process places
> are just OS threads in the same process.  Distributed places can be
> launched in/as separate processes, but then each process would have its own
> set of file descriptors.

Distributed places. However, even if they have their own descriptors, they
would be *my* descriptors. I think I'm limited to 1024 based on the current
machine configuration.


> What DBMS(es)?  In-process DBMS like SQLite use file descriptors, but
> client/server DBMS use network connections (which don't count as open
> files).

I had that in a draft of this message; sorry for leaving it out. I have a
connection to the SQLite DB per distributed place. I *could* convert all of
this to use MySQL/Postgres, but I'm using SQLite because the dataset I'm
working with is across the ocean, and I'm caching things in a way that lets
me move between machines/clusters easily without having to set up new
services. I could possibly reduce this to a pool of DB connections and use a
place channel to do all the queries against the DB (something like the sketch
below); that would cut my descriptor count down.
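
Here's a rough, untested sketch of what I have in mind, using in-process
places for brevity (the same request/reply pattern would go over the
distributed place channels in my actual setup). The database path, the
message shape, and the names start-db-place/remote-query are placeholders,
not code from my project:

#lang racket
(require racket/place db)

;; One place owns the single SQLite connection and serves queries sent over
;; its place channel, so the worker places hold no DB descriptors themselves.
(define (start-db-place)
  (place ch
    (define conn (sqlite3-connect #:database "cache.db"))  ; placeholder path
    (let loop ()
      (match (place-channel-get ch)
        [(list reply sql args)
         ;; Run the query here and send the rows back on the caller's channel.
         ;; (Row contents must be place-message-allowed values: numbers,
         ;; strings, and so on.)
         (place-channel-put reply (apply query-rows conn sql args))
         (loop)]
        ['stop (disconnect conn)]))))

;; A worker sends (list reply-channel sql args) and blocks for the rows.
(define (remote-query db-ch sql . args)
  (define-values (mine theirs) (place-channel))
  (place-channel-put db-ch (list theirs sql args))
  (place-channel-get mine))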


> IIRC, bytecode files are memory mapped, and that requires the file be kept
> open.  But even if every file is mapped into every place, you'd need a lot
> of code files to exhaust 4K descriptors ... if it is being done smartly
> [???], there would only be 1 descriptor needed per file.


Unless the limit is 1024 descriptors, and I have one .zo per distributed
place, plus a DB connection, plus... the open question of whether library
files count against that .zo descriptor count (I don't know)... that could
quickly exhaust things at the 128-distributed-place level. I have multiple
libraries included (sql, gregor, beautiful-racket-lib), as well as several
files that I've written and require.
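
One way I could answer that empirically (a Linux-only guess on my part,
nothing from this thread) is to count the entries in /proc/<pid>/fd from
inside a worker place:

#lang racket
;; Linux-only diagnostic sketch: count the file descriptors a process holds
;; by listing /proc/<pid>/fd. `pid` defaults to 'self, i.e. the process of
;; whichever (distributed) place runs this.
(define (open-fd-count [pid 'self])
  (length (directory-list (build-path "/proc" (format "~a" pid) "fd"))))

(printf "open descriptors: ~a\n" (open-fd-count))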

> Without a whole lot more information the only thought that occurs is to
> have each place force a major GC after it finishes a work unit.  If
> something is not being closed properly by the code, then it might be
> cleaned up by the GC.


Given that the distributed places are doing time-intensive work, forcing a
major GC after each work unit wouldn't hurt me drastically; something like
the sketch below would be easy enough to drop in.
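
A minimal sketch of that suggestion as I understand it (run-work-unit and
process-unit are made-up names, not my actual code):

;; Force a major collection after each work unit, per George's suggestion,
;; so any ports/connections the code dropped without closing get a chance
;; to be finalized and release their descriptors.
(define (run-work-unit unit)          ; hypothetical wrapper
  (begin0 (process-unit unit)         ; hypothetical per-unit worker
          (collect-garbage 'major)))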

I won't be able to investigate more until tomorrow or Tuesday, given my
schedule. I might start by asking the sysadmins for a bump in the file
descriptor limit.

Many thanks,
Matt
