- This topic has 6 replies, 5 voices, and was last updated 10 years, 5 months ago by Anonymous.
March 30, 2009 at 11:21 am #317 Anonymous
Rosetta3 is elegant in terms of design and programming, but it seems so new that nobody has started talking about it yet. I wanted to try its parallel performance with only a snippet from the help docs to go on:
-jdist_rerun -process::condor -process::process……
But I couldn't get it to work. Can anyone clarify the usage?
April 3, 2009 at 5:50 pm #4060 Anonymous
Which executable are you trying to run? As far as I know, the only parallel processing available in 3.0 is through MPI; there's no multithreading that I'm aware of.
April 22, 2009 at 8:09 am #4073 Anonymous
I’m a novice at using Rosetta3.
However, after installing “scons”, I built it successfully and easily on my MacBook (OS X 10.5.6) using the command line:
> scons bin mode=release
After that, the ab initio mode works well, using the command line:
> AbinitioRelax.macosgccrelease @flags -database /Applications/rosetta3_database
Since I have two 2 GHz Intel processors on my machine, I tried to compile Rosetta3 with the following command line:
> scons -j2 gcc=mpi mode=release
That used both of my CPUs to compile the software.
However, when I run the “AbinitioRelax.macosgccrelease” executable with the same command line as before, the calculation still runs on only one of my two CPUs.
Does anyone know the correct command line or flags to run the calculation on two processors, please?
April 24, 2009 at 1:57 pm #4074 Anonymous
‘scons bin mode=release’ compiles a single-processor executable. The easiest way to get it running on two processors is simply to start two jobs in separate folders: instead of running one job with nstruct 5000 in one folder, run two jobs with nstruct 2500 each in separate folders. (Be sure to use different random number seeds!)
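As a rough sketch of that setup (the flags file, folder names, nstruct value, and seed values below are illustrations, not prescribed names):

```shell
# Hypothetical sketch: split one nstruct-5000 run across two CPUs as two
# independent single-processor jobs, each in its own folder with its own seed.
for i in 1 2; do
  mkdir -p run$i
  (
    cd run$i
    AbinitioRelax.macosgccrelease @../flags \
      -database /Applications/rosetta3_database \
      -nstruct 2500 \
      -constant_seed -jran $((1000 + i)) \
      > log.txt 2>&1
  ) &
done
wait  # both jobs run concurrently, one per processor
```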
Rosetta3 does not support multithreading, which is what you would need for a single call to Rosetta3 to use two processors.
Rosetta3 does support MPI. If you have MPI installed on your Mac (no, I don’t know how to do that), adding extras=mpi to the scons command line (scons bin mode=release extras=mpi) will turn on MPI. This lets you run multiple jobs on one machine that communicate with each other via MPI. If you’re only talking about two processors, this is definitely not worth the trouble.
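For completeness, a sketch of the MPI route. The exact executable name produced by an extras=mpi build may differ from what is shown (check your bin/ directory for what your build actually produced), and mpirun assumes an MPI implementation such as OpenMPI is already installed:

```shell
# Build with MPI support (MPI distributes whole jobs; it is not multithreading).
scons bin mode=release extras=mpi

# Launch 2 communicating Rosetta processes on this machine.
# The ".mpi." infix in the binary name is an assumption; check your bin/ folder.
mpirun -np 2 AbinitioRelax.mpi.macosgccrelease @flags \
    -database /Applications/rosetta3_database -nstruct 5000
```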
July 5, 2013 at 7:11 pm #9000 Anonymous
Regarding “instead of running one job with nstruct 5000 in one folder, run two jobs with nstruct 2500 in separate folders”: do you mean that running 1000 models each in 10 folders at the same time is basically the same as installing the cluster MPI version? Is the performance about the same? And will the two approaches influence the resulting models?
May 10, 2009 at 1:01 am #4085 Anonymous
Yes, “scons bin mode=release extras=mpi” can build an executable with MPI support. But how do I run it? I mean, what is the command-option specification? Rosetta3 seems to have changed a lot of the command options from Rosetta 2.3.
I have only found a piece of information like:
-jdist_rerun -process::condor -process::process……
but it does not seem to work on my cluster machine, which runs Rosetta 2.3 in MPI mode just fine.
July 5, 2013 at 7:31 pm #9001 Anonymous
For many Rosetta protocols the run for each output structure is completely independent of the runs for any other output structure, aside from the starting state of the random number generator. So as long as you start with a different random number seed for each command, from a scientific perspective it doesn’t matter if you have one process outputting 5000 structures, two processes outputting 2500 structures each, 10 outputting 500, or 500 outputting 10. It also doesn’t matter scientifically whether this is via independent runs, or via MPI distribution – the effect is basically the same.
Performance doesn’t *quite* scale linearly (there are fixed setup costs, etc.), but it will likely be close. So if one processor takes 10 days to output 5000 structures, 10 processors will take slightly more than 1 day to do 500 structures each. In general, feel free to split up the jobs among as many processors as you have available.
The only caveat is to make sure the various runs all have different starting random number seeds. MPI will make sure the various nodes all have unique starting points, and for independent processes the default is to initialize the RNG from the system entropy source (so you shouldn’t need to worry about time-based collisions). Steven (smlewis) has had issues with automatic initialization and collisions in the past, though, so to be absolutely sure to avoid random number collisions, you can explicitly initialize the RNG with the flag “-constant_seed” and then pass a different number to “-jran” for each process (don’t forget the -jran, or everything will get the default seed). The number actually used as the seed should be printed to the tracer at the beginning of the run.
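To make the seed bookkeeping concrete (the base seed and job count below are arbitrary; only the distinctness of the seeds matters), a launcher for 10 independent runs might assign seeds like this:

```shell
# Hypothetical launcher: print a distinct -constant_seed/-jran pair per job.
base_seed=1111111
njobs=10
for i in $(seq 1 "$njobs"); do
  seed=$((base_seed + i))  # unique per job, so no seed collisions
  echo "job $i: -constant_seed -jran $seed"
  # A real launcher would run the Rosetta command here instead of echo, e.g.:
  # AbinitioRelax.macosgccrelease @flags -nstruct 500 -constant_seed -jran $seed &
done
```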