ddg_monomer.mpi

    • #1071
      Anonymous

        ddg_monomer doesn’t seem to work well with MPI. I would like to send it a job with a large number of mutation combinations and use the MPI-enabled version to finish the job quickly.

        Consider the following protocol:
        mpirun -np 4 /rosetta3.3/rosetta_source/bin/ddg_monomer.mpi.linuxgccrelease @ddg.flag -in:file:s min_cst_0.5.input.pdb -constraints::cst_file ca.ca.dist.cst -ddg::mut_file example.mut -ddg::iterations 50

        #example.mut is a simple mutation, for instance a Leu to Ala mutation on res 1
        total 1
        1
        L 1 A

        As output I get a mut_L1A.out file containing 200 binary structures (50 iterations × 4 MPI processes), which I extract with the extract_pdbs protocol:
        mut_L1A_round_1.pdb
        mut_L1A_round_1_1.pdb
        mut_L1A_round_1_2.pdb
        mut_L1A_round_1_3.pdb

        …
        mut_L1A_round_50_3.pdb

        However, I’m unsure whether these (the 4 structures per iteration, one from each MPI process) are independent trials. I’ve compared the PDBs and they are identical – but so far the ddg_monomer iterations have always converged, for every iteration across the .pdb, _1.pdb, _2.pdb, and _3.pdb files.
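        One quick way to check byte-for-byte identity is with cmp. A small sketch, assuming the filenames from the listing above (the printf/cp lines just create stand-in files so the loop can be run anywhere; in practice the PDBs come from extract_pdbs):

```shell
# Create stand-in files; in a real run these already exist on disk.
printf 'ATOM\n' > mut_L1A_round_1.pdb
for i in 1 2 3; do cp mut_L1A_round_1.pdb "mut_L1A_round_1_${i}.pdb"; done

# Compare the base structure for round 1 against each MPI copy.
for i in 1 2 3; do
  if cmp -s mut_L1A_round_1.pdb "mut_L1A_round_1_${i}.pdb"; then
    echo "round 1, copy $i: identical"
  else
    echo "round 1, copy $i: differs"
  fi
done
```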

        I suspect I should be able to find out from the random seed information in the log file… but I don’t know how to make sense of it since it seems like there is more than one random seed number per structures.

      • #6193
        Anonymous

          A) ddg_monomer is not compatible with MPI at all. You can distribute neither mutations within a mut_file nor separate input PDBs via MPI. I have no idea what it’s doing when you try to run it in MPI – probably recalculating 4 times and putting 4 identical results in the out file (that seems to be what you saw).
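          If you want parallelism anyway, one workaround is to give each mutation its own one-entry mut_file and launch independent serial jobs in parallel. A sketch, reusing the binary name, flag file, and input paths from the original post (your paths and mutation list will differ; the G2P entry is an invented second mutation for illustration):

```shell
# Write one single-mutation mut_file per mutation
# (format: "total" count, then per-mutation blocks).
cat > L1A.mut <<'EOF'
total 1
1
L 1 A
EOF
cat > G2P.mut <<'EOF'
total 1
1
G 2 P
EOF

# Launch an independent *serial* ddg_monomer job per mut_file.
for f in *.mut; do
  ddg_monomer.linuxgccrelease @ddg.flag \
    -in:file:s min_cst_0.5.input.pdb \
    -constraints::cst_file ca.ca.dist.cst \
    -ddg::mut_file "$f" -ddg::iterations 50 \
    > "${f%.mut}.log" 2>&1 &
done
wait
```

          This sidesteps MPI entirely: each process works on a different mutation, so there is no duplicated work.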

          B) Near the top of your log file, you should see four lines like these:

          core.init: Mini-Rosetta version 45510M from https://svn.rosettacommons.org/source/trunk/rosetta/rosetta_source
          core.init: command: /scratch/smlewis/UBQ_rosetta/rosetta_source/bin/UBQ_Gp_disulfide.linuxgccrelease @options
          core.init: Constant seed mode, seed=1111111 seed_offset=0 real_seed=1111111
          core.init.random: RandomGenerator:init: Normal mode, seed=1111111 RG_type=mt19937

          seed is your RNG seed, here 1111111 (the -constant_seed default). You can set this seed with -constant_seed -jran #######
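          To check after the fact which seed a run used, you can just grep the log. A self-contained sketch (rosetta.log is a stand-in filename; the two lines written into it are copied from the log excerpt above):

```shell
# Stand-in log file containing the seed-reporting lines.
cat > rosetta.log <<'EOF'
core.init: Constant seed mode, seed=1111111 seed_offset=0 real_seed=1111111
core.init.random: RandomGenerator:init: Normal mode, seed=1111111 RG_type=mt19937
EOF

# Pull out every seed line to see which seed(s) the run actually used.
grep 'seed=' rosetta.log

# To force a specific seed on a new run, add these flags:
#   -constant_seed -jran 1111111
```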

        • #6196
          Anonymous

            Thank you very much :)

            Is InterfaceAnalyzer compatible with mpi? And how would I have known that ddg_monomer isn’t MPI compatible? If a protocol isn’t MPI compatible do you know of any way to parallelize it?

          • #6197
            Anonymous

              A) InterfaceAnalyzer runs under JD2 and is fully MPI compatible. I wrote much of it and will be happy to help you with it.

              B) Well, we don’t really have a good way to mark which executables are and are not MPI-compatible. Generally, if the documentation mentions that it is, or that it runs under JD2 (the job distributor), then it is MPI compatible. That said, [my understanding of] the way you want to run ddg_monomer is extremely nonstandard within Rosetta. Generally, if something is MPI compatible, it is only compatible by distributing input structures (from -s, -l, or silent files) in combination with -nstruct. It can’t distribute “within a job”. ddg_monomer’s work doesn’t break down along -s/-l/-silent/-nstruct lines, so it doesn’t distribute within the job distributor’s normal MPI model.

              C) For things that are not MPI compatible, there are a few answers. If it is not MPI compatible because it is old and uses JD1 instead of JD2, it just needs to be updated; everyone in the community wants this done and nobody wants to do it. If it is not MPI compatible because it uses no job distributor at all, then moving all the logic to a Mover’s apply() function is 90% of the work (then just put it under the job distributor). ddg_monomer is a funny case because you’d want to distribute parts of the mutlist. JD2 is sufficiently flexible that you could move the mutlist processing code into a JobInputter subclass, which would spawn normal Job objects compatible with the MPI parts of the job distributor, allowing for the sort of MPI distribution you want. That’s a significant amount of work (between three days and two weeks for an experienced developer, perhaps?) …I’m not suggesting you do it, but I thought I’d answer the question.
