“Furthermore, I’m not sure why people want to parallelize the JobDistributor.”
I can’t address this at the Python level. At the C++ level, it’s all about jumping through hoops. The supercomputers and clusters that most of the academic developers have access to REQUIRE parallel distribution of jobs. Thus, a parallelizing job distributor camouflages Rosetta so it can run on those systems. (At one point, one of our developers wrote code called the MPIlistwrapper, which literally did this and nothing else: it made a job pretend to be MPI when it wasn’t, so we could use a specific cluster.)
Even in cases where the sysadmins don’t require parallelization, parallelized Rosetta offers output benefits. Rosetta in parallel can organize output via communication between processes, so that you get one unified scorefile, one unified silent file, etc., instead of a series of independent directories whose contents must be pooled by hand. Automating this is worth the trouble when multiplied across the many developers and users whose time would otherwise be wasted manually pooling their runs.
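To make the "one unified scorefile" idea concrete, here is a minimal sketch of the pattern, not Rosetta's actual MPI job distributor: many workers run jobs in parallel, but every result funnels back to a single coordinating process, which is the only one that writes output. All names here (`run_job`, `run_all`, the placeholder scoring) are hypothetical illustrations, using Python's `multiprocessing` in place of MPI.

```python
# Hypothetical sketch (not Rosetta code): workers run jobs in parallel,
# but all results funnel back to one coordinating process, which writes
# a single unified scorefile instead of per-worker output directories.
from multiprocessing import Pool

def run_job(job_id):
    # Stand-in for one trajectory; returns a (tag, score) record.
    score = 100.0 - job_id  # placeholder scoring, not a real score function
    return (f"decoy_{job_id:04d}", score)

def run_all(n_jobs, scorefile="score.sc"):
    with Pool() as pool:
        records = pool.map(run_job, range(n_jobs))
    # Only the coordinator touches the scorefile: one file, no pooling step.
    with open(scorefile, "w") as fh:
        fh.write("tag score\n")
        for tag, score in records:
            fh.write(f"{tag} {score:.3f}\n")
    return records
```

The design point is simply that the file handle lives in exactly one process; with independent serial runs, each run would own its own directory and someone would have to merge them afterwards.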
You are correct that there is no speed or accuracy benefit to parallelizing the job distributor.