Hi,

I noticed the test suite for 4.0 sets an application environment variable 
{fabric, [{eunit_run, true}]} that results in erlfdb starting a new fdbserver 
instead of using the one defined in the fdb.cluster file that it would use 
otherwise. I think that’s a reasonable default, especially given that many of
these tests seem to erase the entire FDB key space. I also noticed that there’s 
an additional {erlfdb, [{test_cluster_file, "…"}]} environment setting that can
be used to override the behavior of starting a new fdbserver.
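For reference, the two settings could be combined in a sys.config along these
lines — a sketch only, and the cluster file path is a placeholder of my own,
not any real default:

```erlang
%% Sketch of a test sys.config: keep fabric's eunit flag, but point
%% erlfdb at an existing cluster instead of letting it spawn a private
%% fdbserver. The path below is a placeholder.
[
    {fabric, [{eunit_run, true}]},
    {erlfdb, [{test_cluster_file, "/path/to/fdb.cluster"}]}
].
```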

As I mentioned in a recent thread I’ve been hacking on a development container 
for VS Code / GitHub Codespaces to make it easier to get started on CouchDB 
development. For 4.0 I’m thinking to use Docker Compose and the official FDB 
image that the community posts to Docker Hub. In this setup I think it would be 
better to just use the `test_cluster_file` setting to point to the same 
fdbserver running in that container for unit tests rather than start up 
_another_ fdbserver inside the CouchDB container, as this whole setup is 
explicitly designed for development and would not be confused with an FDB 
server being used for some other purpose. If that makes sense to folks, I think 
my next steps would be to find a clean way to inject this `test_cluster_file`
setting into the unit test config when running in the devcontainer.
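One possibility, sketched below with a made-up environment variable name and
helper (neither exists in CouchDB today): have the test setup consult the
devcontainer's environment and only fall back to the spawn-a-new-fdbserver
default when no cluster file is advertised.

```erlang
%% Hypothetical test helper, not existing CouchDB code. If the
%% devcontainer exports FDB_CLUSTER_FILE (variable name is my own
%% invention), point erlfdb's eunit runs at that cluster; otherwise
%% leave the default behavior of starting a private fdbserver alone.
maybe_use_dev_cluster() ->
    case os:getenv("FDB_CLUSTER_FILE") of
        false ->
            ok;
        ClusterFile ->
            application:set_env(erlfdb, test_cluster_file, ClusterFile)
    end.
```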

On a related note, I wonder if some of those tests could be refactored to e.g. 
create an empty directory for the test and remove it afterwards, rather than 
blowing away a whole cluster’s worth of data each time. Perhaps using a parent 
“eunit_test” directory so it’d be easy to clean up if necessary? The thinking 
would be that a developer could have some data in a test cluster, run the test 
suite against the same FDB powering that cluster, and not lose the data. I know 
there are downsides to that approach and we’d want a big red button to reset 
the FDB cluster, but wanted to float it anyway. Cheers,

Adam
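
P.S. For concreteness, here’s a rough sketch of the scoped-cleanup idea,
assuming all eunit data lives under a single known prefix; the prefix name
and function are my own, not anything in the current test suite:

```erlang
%% Sketch only: clear just the eunit-scoped keys rather than the whole
%% keyspace. The <<"eunit_test/">> prefix is an assumption.
-define(EUNIT_PREFIX, <<"eunit_test/">>).

clear_eunit_range(Db) ->
    erlfdb:transactional(Db, fun(Tx) ->
        %% Clears [Prefix, strinc(Prefix)), i.e. everything under the prefix.
        erlfdb:clear_range(Tx, ?EUNIT_PREFIX, erlfdb_key:strinc(?EUNIT_PREFIX))
    end).
```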
