weighted constraints
February 3, 2014 at 11:24 pm #1820 | Anonymous
I’m running abinitio with AtomPair constraints and would like to assign a weight to each constraint based on its reliability. I’ve been looking at the many constraint functions under rosetta-3.5/rosetta_source/src/core/scoring/constraints/, many of which don’t appear to be documented anywhere online, and also at this post:
https://www.rosettacommons.org/node/3344
I get the feeling this should be possible with existing code. Is there a way to multiply the result of each constraint score by some scalar, somewhat like the ConstantFunc in the post above? The SkipViolFunc also seems to do something similar, although I don’t quite understand what. Before I dig deeper and start messing with the code, is there perhaps a relatively simple solution?
Thanks much
February 4, 2014 at 4:14 pm #9756 | Anonymous
First off, are you attempting to use the constraints with Rosetta3.5, or Rosetta++? (I ask because you’ve posted in the Rosetta++ section of the forums, and not everything that’s available under Rosetta3 will be available with Rosetta++.) For the rest of this, I’ll assume you’re working with Rosetta3.5.
The documentation for the constraints is at https://www.rosettacommons.org/manuals/archive/rosetta3.5_user_guide/de/d50/constraint_file.html and should be relatively complete (as these things go). If a constraint isn’t listed there, it’s likely not well tested and might not be recommended for routine use.
To your question: constraints in Rosetta are made up of two parts, the measurement part and the scoring part. “AtomPair” refers strictly to the measurement part: it says that the quantity of interest (call it “x”) is the distance between the two specified atoms. You also have to specify a constraint function, such as “HARMONIC”, “BOUNDED”, or “CONSTANTFUNC”, to turn that raw “x” value into a score. These specify the functional form through which the raw “x” is transformed, e.g. ((x-x0)/sd)^2 for harmonic. Generally speaking, the functions can be mixed and matched with any of the measurement types.
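For instance, the same AtomPair measurement can be paired with different functions. (The atom names, residue numbers, and parameters below are placeholders for illustration; check the exact argument order against the constraint-file documentation before use.)

AtomPair CA 10 CA 50 HARMONIC 8.0 1.0
AtomPair CA 10 CA 50 BOUNDED 4.0 8.0 1.0 0.5 tag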
The simplest way of applying a constant scaling, then (assuming you’re using a harmonic), is to adjust the sd parameter for each function; a smaller sd means a more stringent restraint. An alternative for Rosetta3.5 is the SCALARWEIGHTEDFUNC function, which does exactly what you want: it multiplies whatever existing function you have by a given weight. You use it by prepending it to your existing function. For example, if you wanted to multiply the following constraint by a factor of 3.61:
Angle CB 8 SG 8 ZN 32 HARMONIC 1.95 0.35
you would add “SCALARWEIGHTEDFUNC 3.61” before the harmonic definition:
Angle CB 8 SG 8 ZN 32 SCALARWEIGHTEDFUNC 3.61 HARMONIC 1.95 0.35
Alternatively, you can adjust the sd value:
Angle CB 8 SG 8 ZN 32 HARMONIC 1.95 0.1842
where 3.61*((x-1.95)/0.35)^2 = ((x-1.95)/0.1842)^2, since dividing the sd by the square root of the weight is equivalent to multiplying the score: 0.35/sqrt(3.61) = 0.35/1.9 ≈ 0.1842.
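The same prefix works for the AtomPair constraints you’re running in abinitio. For example (the atom names, residue numbers, and parameters here are placeholders, not a recommendation):

AtomPair CA 10 CA 50 SCALARWEIGHTEDFUNC 3.61 HARMONIC 8.0 1.0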
February 5, 2014 at 9:52 pm #9762 | Anonymous
Thank you. I haven’t actually tried it yet, but it makes perfect sense and is really helpful.
One follow-up question I have is about the absolute value of the weight. Are the weight values only considered relative to each other, or also in the context of the other scoring components? In other words, given some harmonic parameters, if I multiply a constraint by 1000, is that going to “overpower” the other internal scoring components? Should I try to make the weights add up to 1.0?
Thanks much
February 6, 2014 at 5:06 pm #9765 | Anonymous
The constraints are added to the other score terms, so they need to balance: increasing the constraints so that they give really large values will indeed overpower the other score terms, and you’ll get structures with near-perfect constraint satisfaction but horrible physical geometry.
What happens in a constraint calculation is that the constraint takes whatever measurement it makes (e.g. an atom-pair distance in Angstroms, say 3) and passes it through the associated function (e.g. ((x-x0)/sd)^2, so with x0 = 2.8 and sd = 0.4, ((3-2.8)/0.4)^2 = 0.25). The result is then multiplied by the appropriate scorefunction weight. (For AtomPair constraints this is the atom_pair_constraint term.) What this weight is depends on the protocol and the scorefunction, and typically it can be varied. So if you had a weight of 2.0 for atom_pair_constraint, you’d add 0.25*2 = 0.5 REU to the total score for your protein, alongside the other energy terms. (A structure with a -257.8 REU energy without constraints would have a -257.3 REU energy with that single constraint above.)
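As a minimal sketch of that bookkeeping (plain Python for illustration, not Rosetta source; the numbers are the ones from the example above):

def harmonic(x, x0, sd):
    # the HARMONIC functional form: ((x - x0) / sd)^2
    return ((x - x0) / sd) ** 2

distance = 3.0                            # measured atom-pair distance, Angstroms
raw = harmonic(distance, x0=2.8, sd=0.4)  # = 0.25
weight = 2.0                              # atom_pair_constraint scorefunction weight
print(raw * weight)                       # 0.5 REU added to the total energy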
Making the weights add up to 1.0 is unnecessary; they aren’t fractional or proportional, they add directly to the total score. Whether it makes sense to multiply a constraint by 1000 depends on what you’re measuring and the functional form you use. It might be entirely called for with a measurement and function that produce small values across the range of variability you’re likely to see; it wouldn’t be suggested for a measurement and function that show a large value change across that range.
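For completeness: how the atom_pair_constraint weight gets set depends on the protocol. For abinitio runs the constraint file and weight are typically supplied on the command line with flags along these lines (flag names can vary between protocols, so verify against your protocol’s documentation):

-constraints:cst_file my_constraints.cst
-constraints:cst_weight 2.0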