Regarding the efficiency of the new low resolution ligand-docking mover “Transform”
May 15, 2017 at 4:19 pm · #2655 · Anonymous
I was reading the article “Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand” (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0132508), in which the new low resolution mover “Transform” was implemented as part of the RosettaLigand protocol, replacing the old “Trans/Rot” mover. One of the most impressive achievements concerns the efficiency of the protocol. Figure 3 (see link above) shows kernel density plots of the distribution of time needed to produce a single trajectory, for 4 different protocols. The Transform/MIN protocol shows a normal distribution centered at approximately 10 seconds. This could allow really fast exploration of binding poses, given that an average trajectory would take only 10 s.
When I test the aforementioned protocol, my system takes between 500 and 600 seconds to complete each trajectory (roughly a 60-fold increase). Is it possible that some options are making my runs slower than the ones reported?
My options file looks like:
-in:file:screening_job_file input/job_01.js
-parser:protocol input/transform_mcm.xml
-qsar:grid_dir grid
-ex1
-ex2
-restore_pre_talaris_2013_behavior true
-ignore_ligand_chi true
-out:file:silent output/results_01.out
-out:file:scorefile output/results_01.sc
The XML protocol file is exactly the one in the directory:
demos/protocol_capture/2015/rosettaligand_transform/
The trajectory time is independent of the -ex1 and -ex2 flags: I repeated the test with them commented out and the results were the same.
-
May 15, 2017 at 4:46 pm · #12342 · Anonymous
The first thing to know is that the red line in Figure 3 (the one that peaks around 10 s) is actually the Transform/MIN protocol, rather than the Transform/MCM protocol. Adding the Monte Carlo steps to the full-atom portion of the protocol brings the average runtime to more like 18–20 s. That accounts for a ~2-fold speed difference, but doesn’t explain most of what you’re seeing.
What computer are you running it on? If it’s a laptop or a relatively old desktop, you can expect some slowdown versus the reported timings (which were probably measured on a higher-end desktop machine). Do you have a slow hard drive, or are you trying to access something like a remote disk? File I/O might slow things down. You might want to try omitting the -qsar:grid_dir option and see if disk access there is hurting you.
Also, what type of compilation are you running? If you’re using something like rosetta_scripts.linuxgccdebug versus rosetta_scripts.linuxgccrelease, that could account for a large amount of the slowdown, as the debug-mode compilation adds a bunch of extra checks that can slow down runs.
You might also want to try adding the option -analytic_etable_evaluation and see if that helps. With current versions of Rosetta and -restore_pre_talaris_2013_behavior true, Rosetta uses a lot of memory. If your computer doesn’t have enough memory, it will swap to disk, which will *greatly* slow things down. The -analytic_etable_evaluation option should greatly reduce the amount of memory used.
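One way to check whether memory is the bottleneck is to measure the peak resident memory of a single-trajectory run and compare it against your physical RAM. A minimal sketch (the Rosetta binary name and flags file in the commented example are placeholders; the helper itself uses only the Python standard library and works on Linux/macOS):

```python
import resource
import subprocess

def peak_child_rss_mb(cmd):
    """Run a command to completion and return the peak resident set
    size of its child processes, as reported by getrusage()."""
    subprocess.run(cmd, check=True)
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    # ru_maxrss is in KiB on Linux, but in bytes on macOS.
    return usage.ru_maxrss / 1024

# Hypothetical invocation -- substitute your own binary and flags file:
# peak_child_rss_mb(["rosetta_scripts.linuxgccrelease", "@flags", "-nstruct", "1"])
```

If the reported peak is close to (or above) your installed RAM, the run is almost certainly swapping.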
If none of that works, check the tracer output. Are there any warnings or errors that might be giving you issues? Are there certain points where things slow down or “hang”? Using Rosetta 3.8, I’m able to run the protocol capture docks in under 20 s per output structure.
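Putting the suggestions together, the flags file from the original post with the memory-saving option added would look something like the following (a sketch only; keep or drop -ex1/-ex2 as needed, and switch to the release-mode binary if you aren’t already using it):

-in:file:screening_job_file input/job_01.js
-parser:protocol input/transform_mcm.xml
-qsar:grid_dir grid
-ex1
-ex2
-restore_pre_talaris_2013_behavior true
-analytic_etable_evaluation
-ignore_ligand_chi true
-out:file:silent output/results_01.out
-out:file:scorefile output/results_01.sc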
-
May 15, 2017 at 6:12 pm · #12343 · Anonymous
Thanks for your comment, it was very helpful. I was having memory issues and the machine was swapping to the hard drive. Running on a computer with more RAM (and probably better hardware all around) solved the issue: now each trajectory takes about 24 seconds.
Greetings