Rosetta currently doesn’t have an option for stochastic gradient descent; since computing the full gradient vector is fast, there has never been a reason to compute partial gradients. There are, however, several flavours of gradient-based minimization, most of which differ in how they approximate the inverse of the second-derivative Hessian matrix. True gradient descent, which uses only the gradients themselves, is implemented as the “linmin_iterated” minimization type, but it converges slowly and is recommended only for debugging. The default type is “lbfgs_armijo_nonmonotone”, a quasi-Newton method that uses the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm to approximate the inverse of the Hessian matrix from the recent history of gradients; this converges considerably faster.
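The following is a minimal, self-contained sketch of the quasi-Newton idea behind “lbfgs_armijo_nonmonotone”: the L-BFGS two-loop recursion applies an implicit inverse-Hessian approximation, built from a short history of position and gradient differences, without ever forming the matrix, and an Armijo backtracking line search picks the step size. This illustrates the general algorithm only, not Rosetta’s implementation (whose line search additionally uses a nonmonotone acceptance rule); all names here are invented for the example.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Two-loop recursion: apply the implicit inverse-Hessian
    approximation to the gradient without forming the matrix."""
    q = g.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_hist, y_hist)]
    alphas = []
    for s, y, rho in zip(reversed(s_hist), reversed(y_hist), reversed(rhos)):
        a = rho * np.dot(s, q)
        q -= a * y
        alphas.append(a)
    # Scale by gamma = s.y / y.y as the initial (diagonal) Hessian guess;
    # with no history this reduces to plain steepest descent.
    gamma = (np.dot(s_hist[-1], y_hist[-1]) / np.dot(y_hist[-1], y_hist[-1])
             if s_hist else 1.0)
    r = gamma * q
    for (s, y, rho), a in zip(zip(s_hist, y_hist, rhos), reversed(alphas)):
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r

def minimize_lbfgs(f, grad, x0, m=10, tol=1e-6, max_iter=500):
    x, g = x0.astype(float), grad(x0)
    s_hist, y_hist = [], []
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = lbfgs_direction(g, s_hist, y_hist)
        # Simple (monotone) Armijo backtracking line search.
        t, fx, slope = 1.0, f(x), np.dot(g, d)
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, y) > 1e-10:      # keep only curvature-positive pairs
            s_hist.append(s)
            y_hist.append(y)
            if len(s_hist) > m:       # limited memory: drop the oldest pair
                s_hist.pop(0)
                y_hist.pop(0)
        x, g = x_new, g_new
    return x

# Example: the Rosenbrock function, a standard minimization test case.
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
print(minimize_lbfgs(f, grad, np.array([-1.2, 1.0])))  # converges near [1, 1]
```

The “limited-memory” part is the history cap `m`: only the last few update pairs are stored, so the cost per step stays linear in the number of degrees of freedom rather than quadratic, which is what makes the method practical for large structures.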