This topic has 4 replies, 2 voices, and was last updated 11 years, 9 months ago by Anonymous.
February 27, 2012 at 1:21 pm #1179 – Anonymous
When rescoring docked poses one obtains rather different scores. If I understand earlier posts correctly, the results from the ‘score.xxx’ application should be better suited than the docking score for distinguishing the quality of docking results across different proteins? (Please say yes …) Or would ‘relax.xxx’ be a better choice? I am trying to compare docking runs of slightly different peptides of identical length against the same target.
Is there a good overview of the different scores used in the various applications that I may have missed? I mean an overview of which application uses which scoring scheme and how the schemes are composed, beyond the table of energy contributions and the list of options in the online manual.
Do the scoring schemes differ only in their weights, or also in, e.g., the normalizations applied?
February 27, 2012 at 2:37 pm #6706 – Anonymous
Score.xxx will score the existing structures; the only difference from the docking output will be due to the limited precision of the PDB format. Relax.xxx will relax them, producing structures different from those docking produced. You should use score. I don’t think the direct docking score is wrong, however.
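The PDB-precision point can be made concrete with a short Python sketch (the coordinate value is made up, purely for illustration):

```python
# PDB coordinate fields carry only three decimal places (0.001 Angstrom),
# so writing a pose to PDB and reading it back perturbs every atom
# slightly; rescoring the round-tripped structure can therefore give a
# score that differs a little from the score reported during docking.
x_in_memory = 12.3456789             # a full-precision coordinate
x_after_pdb = round(x_in_memory, 3)  # what survives the PDB round trip
print(x_after_pdb)                              # 12.346
print(abs(x_in_memory - x_after_pdb) < 0.0005)  # True: tiny, but nonzero
```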
The docking scorefunction differs mostly in term weights, but a few terms are turned off (pro_close, p_aa_pp, fa_pair, fa_intra_rep) and one new one is activated (hack_elec, an explicit electrostatics term). A good overview of the terms doesn’t really exist – everyone wants it written and nobody wants to write it.
You should also consider InterfaceAnalyzer, which will produce explicit binding energies for your peptides. (It makes the unfortunate assumption that the two bodies are rigid when separated – spectacularly unlikely for a peptide – but it’s better than score alone.)
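A minimal sketch of what that separate-and-rescore binding energy looks like (the function name and all numbers are illustrative, not InterfaceAnalyzer’s internals):

```python
# Rigid-body approximation: the binding energy is the complex score minus
# the scores of the two partners rescored after being pulled apart, with
# their conformations frozen. A free peptide would actually relax, so
# this tends to overestimate how favorable binding is.
def dg_separated(e_complex, e_side1_separated, e_side2_separated):
    return e_complex - (e_side1_separated + e_side2_separated)

# made-up scores, purely for illustration
print(dg_separated(-300.0, -250.0, -25.0))  # -25.0
```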
February 27, 2012 at 3:43 pm #6709 – Anonymous
Thanks for the fast reply!
I very much agree that the docking score is not wrong!
However, I tried to reproduce the ‘subtle’ experimental information I had: one peptide does not bind the target at all, while a slightly modified one shows a weak tendency to bind.
The absolute docking scores did not reflect this; they were very similar, with those of the non-binder even being slightly lower.
To exclude the internal energy of the peptides, I tried to calculate, for both peptides, the difference in score between the starting conformation and the final docked conformation.
The scores from ‘score.xxx’ reflected the experimental information, both as differences and as absolute values.
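The comparison described above can be sketched like this (all numbers are hypothetical, just to show the bookkeeping):

```python
# Subtracting each peptide's starting-conformation score from its final
# docked score cancels most of the peptide-internal energy, leaving the
# change upon docking as the quantity compared between peptides.
def docking_delta(score_start, score_docked):
    return score_docked - score_start

weak_binder = docking_delta(-120.0, -145.0)  # -25.0: favorable change
non_binder  = docking_delta(-118.0, -121.0)  #  -3.0: barely changes
print(weak_binder < non_binder)              # True
```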
I tried the InterfaceAnalyzer before.
For all docking poses and anchored designs it returned a packstat of 0, even after relax (see the example below).
Would you nonetheless recommend relying on one of the dG values?
SCORE: total_score complex_normalized dG_cross dG_cross/dSASAx100 dG_separated dG_separated/dSASAx100 dSASA_int delta_unsatHbonds hbond_E_fraction nres_all nres_int packstat per_residue_energy_int side1_normalized side1_score side2_normalized side2_score description
SCORE: 0.000 -1.828 -22.284 -1.280 -28.968 -1.665 1740.302 12.000 0.363 240.000 69.000 0.000 -1.414 -1.216 -249.227 -0.725 -25.371 anchor_design47-51_0003_0001_0001_0001
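The normalized columns in the line above can be reproduced directly from the raw columns (values copied from the scorefile line; the arithmetic is just division and scaling):

```python
# dG_cross/dSASAx100 and dG_separated/dSASAx100 are the binding-energy
# columns divided by the buried interface area (dSASA_int), times 100.
dG_cross     = -22.284
dG_separated = -28.968
dSASA_int    = 1740.302
print(f"{dG_cross / dSASA_int * 100:.3f}")      # -1.280, as in the file
print(f"{dG_separated / dSASA_int * 100:.3f}")  # -1.665, as in the file
```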
February 27, 2012 at 3:53 pm #6710 – Anonymous
A) I think packstat has to be activated with a command-line flag for InterfaceAnalyzer – it’s slow, so it’s off by default. Try -packstat or check the documentation (or tell me you can’t figure it out, and I’ll look into it).
B) Generally, dG_separated/dSASAx100 is the “best” comparative metric. dG_separated is a more reliable binding energy than dG_cross: the former separates the molecules in space and re-scores, while the latter sums only the cross-interface energy edges in the scoring graph and so miscalculates context-dependent score terms like hydrogen-bond burial sensitivity. The division by dSASA controls for interface size, so that larger interfaces are not automatically better – it scales the energy per unit area – and the x100 just moves the value into the right range to fit in a scorefile (which does not use scientific notation, so the unscaled value would be too small for the available precision).
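The precision point is easy to see with the fixed-point formatting a scorefile uses (three decimals assumed here, matching the output above; the numbers are from the posted SCORE line):

```python
# Without the x100, the per-area value would be crushed by the
# scorefile's fixed-point format; scaled, it keeps three useful digits.
ratio = -28.968 / 1740.302   # dG_separated / dSASA_int
print(f"{ratio:.3f}")        # -0.017  (almost all precision lost)
print(f"{ratio * 100:.3f}")  # -1.665  (comparable across models)
```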
C) If you are unhappy with the results so far, you could try FlexPepDock – it’s intended for docking small flexible peptides.
February 28, 2012 at 10:56 am #6711 – Anonymous