minirosetta: weird behavior


    • #1807

        This thread has been removed by the author.

      • #9699

          Your “-out:file:silent query_silent.out” flag, without any additional output-file manipulation, indicates that each Rosetta run will write its structures to the query_silent.out file in the current working directory (which should be the directory in which Rosetta was started). If you started all 100 runs in the same directory, you’ll get a single query_silent.out file in that directory, and all 100 runs will be appended to that file. This will mostly work, but will occasionally fail if two processes try to write to the file at exactly the same time. This should be the same behavior that all of the previous versions had as well.

          The one difference you might be seeing is where you’re launching the jobs this time versus previous times. If all 100 jobs were previously launched in different directories (e.g. workingdir/1/, workingdir/2/, workingdir/3/, etc.), then you’d get 100 separate files (e.g. workingdir/1/query_silent.out, workingdir/2/query_silent.out, workingdir/3/query_silent.out, etc.), but if you launched them all in the same directory, you’d get just one file (e.g. workingdir/query_silent.out, or just workingdir/1/query_silent.out with an empty workingdir/2/ directory).
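The per-directory layout described above can be sketched as a small launcher loop. This is a sketch under assumptions: the binary name (minirosetta.linuxgccrelease), the flags file (@flags), and the directory names are placeholders, and the real Rosetta invocation is left commented out so the loop itself runs anywhere.

```shell
#!/bin/sh
# Launcher sketch: one working directory per run, so each process writes
# its own query_silent.out and no two runs ever share an output file.
for i in 1 2 3; do                          # 100 runs in practice
  mkdir -p "workingdir/$i"
  # Hypothetical Rosetta invocation (binary and flags-file names are assumptions):
  # (cd "workingdir/$i" && minirosetta.linuxgccrelease @flags \
  #     -out:file:silent query_silent.out &)
  touch "workingdir/$i/query_silent.out"    # stand-in for the run's output
done
ls workingdir/*/query_silent.out
```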

          You can output multiple runs to multiple files in the same directory, but you’ll have to manually change the name of the file passed to -out:file:silent for each run. Alternatively, you can launch from one directory but send the output to a different directory (e.g. a different directory for each run) by using the -out::path::all flag. (Finally, if you run the 100 runs under MPI instead of as completely separate processes, Rosetta will handle the renaming and/or safe combining for you.)
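The two same-directory alternatives above can be sketched together. The binary name, flags file, and output names are assumptions, and the real invocations are commented out so the loop runs anywhere.

```shell
#!/bin/sh
# Option A: a unique silent-file name per run in one shared directory.
# Option B: one file name, but a separate output directory per run.
for i in 1 2 3; do                          # 100 runs in practice
  mkdir -p "out/$i"
  # A) minirosetta.linuxgccrelease @flags -out:file:silent "query_silent_$i.out" &
  # B) minirosetta.linuxgccrelease @flags -out:file:silent query_silent.out \
  #        -out::path::all "out/$i" &
done
ls -d out/1 out/2 out/3
```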

          P.S. I don’t think the issue you’re running into with reading the file is its size; instead, it’s probably two processes writing to the file at the same time, resulting in a garbled structure. Try adding the flag “-silent_read_through_errors” to simply ignore the structures with errors.
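In an options file, the suggested flag would sit next to the existing output flag. This is a sketch of a flags-file excerpt, not a complete option set:

```text
# excerpt from an @flags options file (sketch)
-out:file:silent query_silent.out
-silent_read_through_errors    # skip garbled structures instead of aborting
```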

        • #9700

            Dear Moretti, thanks for your reply.
            I realized that I was redirecting all the silent files to the same directory. A test with “-nstruct 10” gave me only one silent file with 1000 structures. I thought I was asking something stupid and removed the thread.
            I didn’t know about this behavior and find it very nice: this way I don’t have to rename all the silent files before merging. On the other hand, as you said, it will occasionally fail.
            I’ll try Rosetta MPI. I usually run the threading protocol with a script that copies the same directory 100 times and starts minirosetta from inside each one. Will many modifications to my flags and scripts be necessary to run MPI?
            How do you usually run MPI?

          • #9701

              In order to run with MPI, you’ll need to have the MPI libraries and compiler set up on your system, and then (re)compile Rosetta for MPI by adding the “extras=mpi” flag to the scons command line (MPI and non-MPI builds can exist side-by-side in the same source tree: just call application.mpi.linuxgccrelease for the MPI version and application.default.linuxgccrelease for the non-MPI version). To actually run with MPI, you’ll need to use the MPI launcher your system provides. (For example, “mpirun -np <number_of_processes> application.mpi.linuxgccrelease @options” instead of just “application.default.linuxgccrelease @options”.)
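The build-and-launch steps above can be sketched as shell commands. The scons arguments, rank count, and binary name are assumptions that vary by Rosetta version and MPI implementation, so the commands are shown as comments rather than executed.

```shell
#!/bin/sh
# Build the MPI executables alongside the default build (run from the
# Rosetta source tree; exact scons arguments vary by version):
#   ./scons.py bin mode=release extras=mpi
#
# Launch under MPI; the launcher name and flags depend on the MPI
# implementation (an OpenMPI-style command; the rank count is an assumption):
NP=100
echo "mpirun -np $NP minirosetta.mpi.linuxgccrelease @options"
```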

              Each MPI system is a little bit different, so the exact command and extra flags you’ll need, as well as how to set up the MPI compiler and libraries, will depend heavily on which MPI implementation you’re using. If you have a local sysadmin who knows about MPI, discussing how to work with MPI on your systems would be a good first step.

            • #9702

                I’m halfway there: Rosetta MPI is compiling smoothly.
                Thanks for helping. Best.
