Parallel processing?


    • #637
      Anonymous

        I’ve been trying to figure out how to implement the Job Distributor for parallel processing without much success. The PyRosetta manual states “… the script can be run multiple times for multiple processors all working on the same pool of decoys.” How is this accomplished? I have 2 quad-core processors on the machine in question. I’d love to have all eight cores working independently, but even if I can only do it on the two independent processors, that still cuts computation time in half.

        Any thoughts? Thanks,

        Mark

      • #4508
        Anonymous

          I was one of the authors on the underlying C++ job distributors, but I’ll freely admit I know nothing about the python layer. I can tell you this:

          A) The underlying C++ needs to be recompiled with extras=mpi to get MPI-style parallel processing. I’m sure you don’t have this if you got binaries, and I’m not sure how to generate it for the Python bindings.

          B) The job distributor you are likely to be using supports an option called -run::multiple_processes_writing_to_one_directory. In this mode, non-communicating Rosetta processes will use the filesystem to signal by making temporary files myjob_0001.in_progress, etc. If the number of processors is small and the jobs are long, then this method allows use of multiple processors on one big job in one directory with minimal overwriting. This method has NO GUARANTEES against overwriting, duplication of effort, or screwing up the scorefile with simultaneous writes. It should be sufficient for your use.

          C) You can always run 8 jobs in 8 different directories. Just make sure you start them with independent random number seeds, and you’ll generate trajectories just as effectively as one job running eight times as long. This is functionally equivalent to (and the same speed as) MPI parallelization.
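
          As a rough illustration of option (C), here is a minimal launcher sketch (not from the original thread). It assumes a worker script called my_protocol.py (a placeholder name) that calls PyRosetta’s init() itself, and simply starts eight copies of it in eight separate directories. By default each process should draw its own random seed; if you force seeds with -constant_seed, give each process a different -jran value. For option (B) you would instead keep one directory and pass -run:multiple_processes_writing_to_one_directory through init().

          import os
          import subprocess

          script = os.path.abspath("my_protocol.py")   # placeholder name for your PyRosetta script

          procs = []
          for i in range(8):                           # one job per core
              workdir = "run_%02d" % i                 # separate directory per job
              os.makedirs(workdir, exist_ok=True)
              procs.append(subprocess.Popen(["python", script], cwd=workdir))

          for job in procs:
              job.wait()                               # block until all eight jobs finish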

          If you have more questions about the underlying C++, you may want to repost on the rosetta3.0 board; I don’t check this one often.

        • #4659
          Anonymous

            There is a JobDistributor class written in Python that you can use to run multiple processes on the same directory. See the example in the tutorial book: http://graylab.jhu.edu/pyrosetta/downloads/documentation/Workshop7_PyRosetta_Docking.pdf

            • #4797
              Anonymous

                I have created a job distributor using PyJobDistributor to generate 100 trajectories, as described in Workshop part 4. But it seems that the job is not distributed across several cores. When I use multiple cores in Modeller, I can see that the Python process uses more than 100%. I have enclosed the Python script as an attachment.

                jd = PyJobDistributor("polya", 100, scorefxn)
                jd.native_pose = starting_p

                # run
                p = Pose()  # working pose; assumed to be created earlier in the attached script
                while not jd.job_complete:
                    p.assign(starting_p)
                    p = protocol(p, "aat000_09_05.200_v1_3.polya")
                    jd.output_decoy(p)

              • #4799
                Anonymous

                  Running multiple processes is not the same as distributing across multiple cores. What the previous posts are saying is that, on a multicore machine, you can simply run your PyRosetta script multiple times. This is easily done with the background operator (python myPyRosettaScript.py &) in a Linux terminal, or by starting the other jobs from separate terminal windows. This will run many serial jobs.

                  Furthermore, I’m not sure why people want to parallelize the JobDistributor. Docking simulations, or other methods that need to generate hundreds or thousands of configurations, are already “embarrassingly parallel”: running one job on many cores is no different from running many jobs on many single cores. (Actually, Amdahl’s law says it would be faster to run many serial jobs.) If I need to generate 10,000 configurations, why not just run 4 JobDistributors churning out 2,500 structures each?
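
                  For what it’s worth, here is a hedged sketch of that 4 x 2500 split (not from this thread): the same script is launched four times, e.g. python split_run.py 0 through python split_run.py 3, and each invocation gives its PyJobDistributor a distinct decoy prefix so the four runs’ outputs never collide. The import path for PyJobDistributor, the input file name start.pdb, and the protocol() stand-in are assumptions that depend on your PyRosetta version and your actual protocol.

                  import sys
                  from pyrosetta import init, get_fa_scorefxn, pose_from_pdb, Pose
                  from pyrosetta.toolbox.py_jobdistributor import PyJobDistributor  # import path varies by version

                  def protocol(pose):
                      """Placeholder for whatever sampling you run on the pose (docking, relax, ...)."""
                      pass

                  part = int(sys.argv[1])                  # 0, 1, 2 or 3
                  init()                                   # each process draws its own random seed

                  scorefxn = get_fa_scorefxn()
                  starting_p = pose_from_pdb("start.pdb")  # placeholder input structure

                  # distinct prefix per run, 2500 decoys each -> 10,000 total
                  jd = PyJobDistributor("polya_part%d" % part, 2500, scorefxn)
                  jd.native_pose = starting_p

                  p = Pose()
                  while not jd.job_complete:
                      p.assign(starting_p)
                      protocol(p)
                      jd.output_decoy(p)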

                  Please correct me if I’m wrong, but I believe PyRosetta is inherently serial, because any secondary access to the Rosetta shared library through the PyRosetta interface will be blocked by the first Python thread that calls into the library. As a previous poster says, if Rosetta is compiled with MPI enabled (I have no experience with this), then yes, scoring-function computation and other parallelized routines might benefit.

                • #4803
                  Anonymous

                    “Furthermore, I’m not sure why people want to parallelize the JobDistributor.”

                    I can’t address this at the python level. At the C++ level, it’s all about jumping through hoops. The supercomputers and clusters that most of the academic developers have access to REQUIRE parallel distribution of jobs. Thus, a parallelizing job distributor camouflages Rosetta to run on those systems. (At one point, one of our developers wrote code called the MPIlistwrapper which literally did this and nothing else: make a job pretend to be MPI when it wasn’t so we could use a specific cluster).

                    Even in cases where the sysadmins don’t require parallelization, parallelized Rosetta offers output benefits. Rosetta in parallel can organize output via communication between threads, so that you get one unified scorefile, one unified silent file, etc., instead of a series of independent directories whose contents must be pooled. Automating this is worth the trouble when multiplied across the many developers and users whose time would otherwise be wasted manually pooling their runs.

                    You are correct in that there is no speed or accuracy benefit to parallelizing the job distributor.

                  • #4811
                    Anonymous

                      Ah okay, that makes sense.
