    • #2378
      Anonymous

        Hello people.

        Is it normal for cluster.py to occupy so much swap space? I am following this tutorial and ran into a problem: it used up the entire swap and the program stopped running. I increased the swap to 54 GB, and it is still using almost all of it.

        It has been running for more than 22 hours, and it looks like it will use up all of the memory again and crash.

        The command is:

        python /home/jrcf/rosetta/tools/protein_tools/scripts/clustering.py --silent=cluster_all.out --rosetta=/home/jrcf/rosetta/main/source/bin/cluster.linuxgccrelease --database=/home/jrcf/rosetta/main/database/ --options=cluster.options cluster_summary.txt cluster_histogram.txt

        cluster.options:

        -in:file:fullatom 

        -out:file:silent cluster_all.out

        -run:shuffle

        -cluster:radius -1 

        -cluster:input_score_filter 0

         

      • #11441
        Anonymous

          The old cluster application is known to perform miserably when handed a large number of structures.  It keeps them all in memory.  So, yes, this is normal.  You should just filter out the top few percent by energy and cluster those.
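
          Something like this rough Python sketch might work for that pre-filtering step (assuming the silent file follows the usual layout: the first SCORE: line is a header containing "description", the second column of the other SCORE: lines is the total score, and the last column of every data line is the structure tag; the file names and the 5% cutoff below are only placeholders):

          #!/usr/bin/env python
          # Rough sketch: keep only the lowest-energy fraction of structures from a
          # Rosetta silent file, so the cluster app has far less to hold in memory.
          # Assumptions (check against your own file): the header SCORE: line contains
          # "description", the total score is the second column of each per-structure
          # SCORE: line, and the structure tag is the last column of every data line.

          def filter_silent(in_path, out_path, keep_fraction=0.05):
              # collect (total_score, tag) for every structure
              scores = []
              with open(in_path) as fh:
                  for line in fh:
                      if line.startswith("SCORE:") and "description" not in line:
                          cols = line.split()
                          try:
                              scores.append((float(cols[1]), cols[-1]))
                          except (ValueError, IndexError):
                              pass  # skip malformed SCORE: lines

              # lowest Rosetta energy first; keep only the best few percent
              n_keep = max(1, int(len(scores) * keep_fraction))
              keep_tags = {tag for _, tag in sorted(scores)[:n_keep]}

              with open(in_path) as fh, open(out_path, "w") as out:
                  for line in fh:
                      cols = line.split()
                      is_header = line.startswith("SEQUENCE:") or "description" in line
                      if is_header or (cols and cols[-1] in keep_tags):
                          out.write(line)

          if __name__ == "__main__":
              # placeholder file names for this example
              filter_silent("cluster_all.out", "cluster_top5pct.out")

          The smaller file (cluster_top5pct.out here) could then be passed to clustering.py via --silent in place of the full cluster_all.out.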

           

          You could also look into Calibur for clustering instead. I know Calibur was recently put into Rosetta, but I don't know when that will be released.

        • #11443
          Anonymous

            Thanks, SMLEWIS.

            I am trying to use Calibur now.
