“scons: done building targets” means you’re golden. score_jd2 is irrelevant; the order in which the apps compile is not important.
It’s almost impossible to have Rosetta compile but not compile right. (That said, we had a nagging issue in the options parser for a year…)
Several testing suites are embedded in the release. None are really meant for end-user use, but there’s no reason you can’t use them.
A) Unit tests
The unit test suite (most of the code in the test/ directory) consists of, well, unit tests. Check Wikipedia. Our test coverage is quite poor, but these tests have KNOWN answers that are constant across all systems; those answers are embedded in the tests. To run the unit tests, you need to compile them and then run them. They are “supposed” to run in debug mode (about 30 minutes to run the tests); a worked example follows the commands below:
scons (to compile the debug libraries)
scons cat=test (to compile the tests)
cd to rosetta_source
test/run.py -database (database) -mute all
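Concretely, the whole sequence might look like the following (the database path and -j value are just illustrations; point -database at wherever your rosetta_database actually lives):
cd rosetta_source
scons -j8                                              # build the debug libraries; -j parallelizes the build
scons -j8 cat=test                                     # build the unit test executables
test/run.py -database ../rosetta_database -mute all    # run the tests with tracer output muted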
It’s usually faster in release mode (a few mins to run tests):
(scons mode=release not necessary, since you already compiled)
scons cat=test mode=release
modify test/run.py to use release executables: look for the system call to scons and add mode=release to that line (a rough sketch of this edit follows below)
test/run.py -database (database) -mute all
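The exact line inside test/run.py varies between releases, so treat this as a purely hypothetical sketch of the edit (your copy will not match it verbatim):
# somewhere in test/run.py there is a system call that launches scons, e.g. (hypothetical):
# before:  "scons cat=test"
# after:   "scons cat=test mode=release"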
Results should look like:
Unit test summary
Total number of tests: 698
number tests passed: 698
number tests failed: 0
Success rate: 100%
End of Unit test summary
Except that you’ll have fewer tests than that.
B) Integration tests
These tests run short Rosetta protocols (mostly too short to produce useful results, but long enough to put the code through its paces). Rosetta is unavoidably sensitive to numerical noise, so these trajectories are not guaranteed to match across different hardware (which is why we don’t distribute what the output “should” look like). You can run them with:
cd test/integration
integration.py -j(num procs) -d database
The first run will report a “failure” because it doesn’t yet have both a ref and a new directory to compare, but you can look around in the ref directory to check that each test produced something that resembles useful results (usually a PDB or silent file). If you run the script twice, it should report perfect success (comparing new against ref on your machine). Expect about 30 seconds per test, and you can parallelize it with -j.
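To make that concrete (the -j value and database path are illustrative):
cd test/integration
integration.py -j4 -d ../../rosetta_database    # first run: fills ref/ and reports a “failure” since there is no new/ yet
integration.py -j4 -d ../../rosetta_database    # second run: fills new/ and compares it against ref/
diff -r ref new                                 # comparing a machine against itself, this should show little or nothing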
C) Scientific tests
We also have scientific tests, which attempt to measure whether Rosetta is doing things “right”. These test objective measures like sequence recovery in design or RMSD in structure prediction. Go to test/scientific; I think the scientific.py script will run it. This takes hours and hours and hours.
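If you do want to try it, my best guess at the invocation is below (I haven’t verified the flags; check the top of scientific.py first):
cd test/scientific
scientific.py -d ../../rosetta_database    # the -d flag and path are assumptions, mirroring the other test drivers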