This topic has 3 replies, 2 voices, and was last updated 10 years, 2 months ago by Anonymous.
September 2, 2013 at 1:34 pm #1711 Anonymous
I recently read a paper titled “Generalized fragment picking in Rosetta: design, protocols and applications”, and I ran the picker with Rosetta 3.4.
In this paper, the two fragment pickers are compared in terms of their results on an ab initio structure prediction benchmark. I think it is exceedingly difficult to make accurate, general statements that way. Do you know of any papers that evaluate a fragment library directly, by some other objective method? I would appreciate any recommendations or guidance.
September 2, 2013 at 8:08 pm #9267 Anonymous
The fragment picker as described in the paper you reference would be the current, recommended procedure for picking fragments. To my knowledge it’s the one everyone in the Baker lab, and probably most others in the Rosetta community, are now using, and it’s the one available through Robetta. I’m not aware of any reason why one would prefer the older methods (though I don’t actually work on anything that requires fragment picking myself).
That’s not to say that there isn’t some other (undiscovered) way that alternate methods might be preferable. One of the problems with benchmarking these sorts of things is that coming up with a good, objective benchmark is hard. Gront et al. used ab initio folding in part because that’s what they were primarily interested in, and proxy measures can sometimes give ambiguous/conflicting results.
If you’re doing something besides ab initio, you may want some other benchmarking method. What that would be depends on what you’re interested in. Often this sort of benchmarking can be an (admittedly often low-impact) paper in and of itself – especially if you can come up with a new and relevant way of looking at the problem. I would suggest looking at the papers of people who have done work similar to what you’re interested in. They may have come up with a benchmarking method for fragment picking that is of more relevance to you than ab initio.
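As a concrete illustration (my own sketch, not something from the paper), one direct, folding-free measure people sometimes use is per-position fragment-to-native similarity: for each window of the query sequence, how close does the best candidate fragment get to the native backbone? The toy code below uses distance-matrix RMSD (dRMSD) over CA coordinates purely so that no superposition step is needed; all function names and the coordinates are invented for the example.

```python
# Hypothetical sketch: score a fragment library by how close its best
# candidate comes to the native structure at each position. dRMSD compares
# intra-fragment CA-CA distance matrices, so no superposition is required.
import math
from itertools import combinations

def drmsd(coords_a, coords_b):
    """dRMSD between two equal-length lists of (x, y, z) CA coordinates."""
    assert len(coords_a) == len(coords_b)
    sq_diffs = []
    for i, j in combinations(range(len(coords_a)), 2):
        d_a = math.dist(coords_a[i], coords_a[j])  # pairwise distance in A
        d_b = math.dist(coords_b[i], coords_b[j])  # pairwise distance in B
        sq_diffs.append((d_a - d_b) ** 2)
    return math.sqrt(sum(sq_diffs) / len(sq_diffs))

def best_fragment_drmsd(native_window, candidates):
    """Lowest dRMSD of any candidate fragment against the native window."""
    return min(drmsd(native_window, c) for c in candidates)

# Toy example: a 3-residue native window and two candidate fragments.
native = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
candidates = [
    [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)],  # identical to native
    [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (3.8, 3.8, 0.0)],  # bent fragment
]
print(best_fragment_drmsd(native, candidates))  # 0.0: the identical fragment wins
```

Averaging this best-fragment score over all positions (or reporting the fraction of positions with a near-native fragment) gives a single library-level number that can be compared between pickers without running a full folding benchmark.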
If nothing like that is forthcoming, and you’re not interested in doing the benchmarking yourself, I’d probably recommend considering “good enough” good enough, and just using the recommended technique for ab initio, even if there are lingering questions regarding its applicability to your particular application.
(If you can better summarize the ways in which the benchmarking of Gront et al. is sub-optimal, I could potentially pass it around to people more familiar with fragment picking, and see if they have any comments.)
September 3, 2013 at 4:20 am #9275 Anonymous
Thanks for your timely response. I feel embarrassed that I expressed myself poorly, and I apologize for any confusion it caused. To my knowledge the fragment picker is the best and most up-to-date method, and I don’t mean to suggest that the test benchmark is not objective. I’m simply asking whether other papers use other measurements to evaluate a fragment library directly, besides comparing prediction results. Any suggestions would be a great help to me.
Thank you so much!
September 3, 2013 at 7:04 pm #9279 Anonymous
Nothing you said was wrong, and there’s no need to apologize for it. It’s good to be questioning the rationale behind the methods you’re using.
Unfortunately, no good review paper comparing different types of fragment picking came to mind for the people I asked. One possibility is to look at papers which cite the Rosetta fragment picking papers, or other papers doing fragment picking. You can do this through Google Scholar (e.g. http://scholar.google.com/scholar?start=0&hl=en&as_sdt=0,48&sciodt=0,48&cites=10279189251136753858 – once you find a paper on Google Scholar, simply click the “Cited by” link under the blurb), or you can do it through the Web of Science (http://thomsonreuters.com/web-of-science/) if your institution has access to it.