Running multiple processes is not the same as distributing one process across multiple cores. What the previous posts are saying is that anyone with a multicore machine can simply run their PyRosetta script multiple times. This is easy to do with the background operator in a Linux terminal (python myPyRosettaScript.py &) or by starting each job from a separate terminal window. The result is many serial jobs running at once.
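To sketch the same "many serial jobs" idea from inside Python (useful if you'd rather launch everything from one driver script than from four terminals), something like the following works. Note this is just a sketch: myPyRosettaScript.py is the script name from this thread, and the echo stand-in command is only there so the example runs anywhere; swap in the real python invocation on your machine.

```python
import subprocess

N_JOBS = 4  # one per core on a quad-core machine

# The real command would be: ["python", "myPyRosettaScript.py"]
# Here a stand-in is used so the sketch is self-contained and runnable.
def launch(job_id):
    """Start one independent serial job in the background and return the handle."""
    cmd = ["echo", f"job {job_id} finished"]
    log = open(f"run_{job_id}.log", "w")
    return subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)

if __name__ == "__main__":
    procs = [launch(i) for i in range(N_JOBS)]
    for p in procs:
        p.wait()  # equivalent to the shell's "wait": block until all jobs finish
```

Each Popen call is the Python equivalent of appending & in the shell: the jobs run concurrently as independent OS processes, and the OS scheduler spreads them over the available cores.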
Furthermore, I'm not sure why people want to parallelize the JobDistributor. Docking simulations, and other protocols that need to generate hundreds or thousands of configurations, are already "embarrassingly parallel": running one job on many cores is no different from running many jobs on many single cores. (Actually, Amdahl's law suggests the many serial jobs would be faster, since they have no serial bottleneck or parallelization overhead.) If I need to generate 10,000 configurations, why not just run 4 JobDistributors churning out 2,500 structures each?
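To make the split concrete, here is a minimal sketch of the bookkeeping for that 10,000-over-4 partition. Everything here is hypothetical illustration, not PyRosetta API: structure_range is an invented helper, and the worker id would come from however you launch the four copies (e.g. a command-line argument).

```python
import sys

TOTAL_STRUCTS = 10_000
N_WORKERS = 4

def structure_range(worker_id, total=TOTAL_STRUCTS, workers=N_WORKERS):
    """Return the half-open (start, end) range of decoy indices for one worker.

    Each of the 4 workers gets a contiguous block of 2,500 structures,
    so the 4 JobDistributors never step on each other's output names.
    """
    per_worker = total // workers  # 2500
    start = worker_id * per_worker
    return start, start + per_worker

if __name__ == "__main__":
    # Launched four times, e.g.: python run_chunk.py 0  ...  python run_chunk.py 3
    wid = int(sys.argv[1]) if len(sys.argv) > 1 else 0
    start, end = structure_range(wid)
    print(f"worker {wid}: generating decoys {start} through {end - 1}")
```

Inside the range loop each copy would run its own ordinary serial JobDistributor; the only coordination needed is making sure the output file names (or decoy indices) don't collide.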
Please correct me if I'm wrong, but I believe PyRosetta is inherently serial, because any secondary access to the Rosetta shared library through the PyRosetta interface will be blocked by the first Python thread that calls into the library. As a previous poster says, if Rosetta is compiled with MPI enabled (I have no experience with this), then yes, scoring function computation and other routines that are parallelized internally might benefit.