ddg_monomer: what is the meaning of “total score” and “score”?


  • Author
    • #2119

        Dear friends,
        After running ddg_monomer on a PDB that scores about -740 REU, I used score_jd2.linuxgccrelease to score the silent output. In the score file (attached), the first two columns are “total_score” and “score”. The scale of “total_score” is close to talaris2013, i.e. -700 to -730; the scale of “score” is around -1000, much lower than -700.

        Can I ask:
        1) What is the difference between “total_score” and “score”?

        2) Why can’t we just use one score?

        3) In the log file (attached) generated during ddg_monomer, it is “score” rather than “total_score” that is used. Why is that?

        4) The ddG prediction for this mutant is:

        ddG: description total fa_atr fa_rep fa_sol fa_intra_rep fa_elec pro_close hbond_sr_bb hbond_lr_bb hbond_bb_sc hbond_sc dslf_fa13 rama omega fa_dun p_aa_pp ref
        ddG: D1A -0.362 1.253 1.043 -1.920 0.022 -0.284 0.085 -0.062 -0.330 0.015 0.912 -0.042 -0.117 -0.349 -2.538 0.071 1.880

        So does the first score element, “total”, i.e. -0.362, mean “total_score” or “score”?

        Thank you very much.

        Yours sincerely

      • #10807

          Can someone help me with the difference between “total_score” and “score”? Thank you.

        • #10825

            By default, Rosetta tends to bring forward “non-standard” scoreterms from input files. This is helpful in long protocols, where you can annotate files with information about the various stages. The “total_score” output should be the score calculated by the score_jd2 application itself. The “score” term is likely a score carried forward from a previous stage – compare it to the “score” term in the input files you gave to the rescoring. The numeric difference between the two likely reflects differences in the scoring settings used at the two stages: with versus without constraints, different scoring options, different score functions, or even additional protocol-specific terms that were added. Take a closer look at the scores reported with the input files to tell more.

            “score”, “total”, and “total_score” all mean basically the same thing – which one is used depends on which protocol is being run.

          • #10848

              Hi R Moretti,
              Thank you for your answer, but I am still quite confused. For example, what are “standard” and “non-standard” scoreterms? Since “total_score” and “score” give different values, why do you say they are the same?

              I think I may have made the question complicated. The main thing I want to know is: what scorefunction is used for the term “total” in “ddg_predictions.out”?

              I also find that the algorithm used to calculate “total” seems NOT to be what the documentation says:

              “the most accurate ddG is taken as the difference between the mean of the top-3-scoring wild type structures and the top-3-scoring point-mutant structures”, which I have posted about separately.

              Thank you very much.

              Yours sincerely

            • #10958

                “Standard” versus “non-standard” basically depends on whether the score term can be specified in a scoring weights file – terms that can appear in a weights file are standard terms; other metrics reported as “scores” are the non-standard ones.

                The score vs. total_score difference comes from the scores being calculated at different stages of the protocol. They’re both the whole-structure score for whatever stage they’re calculated in, but the details of the scoring differ between stages, so the score and total_score values differ, reflecting that difference.

                In the ddg_predictions.out output file, the “total” is the predicted ddG – the sum of the individual scoreterm differences between the mutant and the wild type. How it is calculated depends on your options. If you have -ddg::mean set, then it looks to be computed as the average of (up to) the 20 best-scoring replicates of the mutant minus the average of (up to) the 20 best-scoring replicates of the wild type. If you have -ddg::mean off and instead have -ddg::min set, then it’s calculated as the minimum energy of the mutant minus the minimum energy of the wild type. If you have neither set, it likely (but not necessarily) defaults to the same behavior as specifying min, along with a printed error message. – I’m not sure where the value of three came from.
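                The two aggregation modes described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Rosetta’s actual implementation – the function names, the example energies, and the replicate count are all assumptions for demonstration:

                ```python
                # Hypothetical sketch of the two ddG aggregation modes
                # (roughly -ddg::mean vs -ddg::min); not Rosetta source code.

                def ddg_mean(mut_scores, wt_scores, n=20):
                    """Mean of the (up to) n best-scoring (lowest-energy) mutant
                    replicates minus the mean of the (up to) n best wild-type replicates."""
                    best_mut = sorted(mut_scores)[:n]
                    best_wt = sorted(wt_scores)[:n]
                    return sum(best_mut) / len(best_mut) - sum(best_wt) / len(best_wt)

                def ddg_min(mut_scores, wt_scores):
                    """Minimum mutant energy minus minimum wild-type energy."""
                    return min(mut_scores) - min(wt_scores)

                # Made-up replicate energies (REU) for a slightly destabilizing mutant
                wt = [-740.1, -739.5, -738.8]
                mut = [-739.0, -738.2, -737.9]
                print(f"mean mode: {ddg_mean(mut, wt, n=3):.2f}")  # positive = destabilizing
                print(f"min mode:  {ddg_min(mut, wt):.2f}")
                ```

                With these toy numbers both modes happen to agree; on real replicate ensembles they generally differ, which is another reason the reported “total” depends on the options used.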
