ClusterApp hogs way too much memory…

    • #366
      Anonymous

        Hi, all.

        I’m trying to cluster some 50,000 de novo decoys; the initial sequence was about 60 aa long. My lab cluster’s queueing system kills the job, and when I run it without the queueing system, it visibly hogs memory.

        Some facts:
        (1) This happens when clustering both silent and pdb files.

        (2) Apparently, Rosetta 2.3 clustering worked fine on the cluster… but, according to a co-worker, I can’t cluster Rosetta 3.0 decoys with the Rosetta 2.3 application. (I am new to both, so I decided to learn 3.0 first.)

        (3) The IT admin gave me this info on the cluster:
        kernel: 2.6.16.28 SMP, i686, 32bit
        gcc-4.1

        Is this a bug, or expected behaviour?

      • #4202
        Anonymous

          50,000 poses of ~60 aa each is pretty likely to swamp your memory. This is probably expected, if undesirable, behavior: not a memory leak, but not a good use of memory either. If the old ++ version can handle that many poses, it’s probably better optimized.

          I’m not aware of any reason why ++ clustering should fail against 3.0 structures. A PDB is a PDB is a PDB, so the old code should be able to read them in.
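
          For a back-of-envelope sense of scale (the per-atom byte count below is a guess at the in-memory overhead of a pose, not a measured Rosetta number):

              # Rough estimate of the memory needed to hold 50,000 poses at once.
              n_decoys = 50_000
              n_residues = 60
              atoms_per_residue = 8    # ballpark for centroid-mode poses
              bytes_per_atom = 500     # guess: coordinates plus per-atom bookkeeping

              total = n_decoys * n_residues * atoms_per_residue * bytes_per_atom
              print(f"~{total / 2**30:.1f} GiB")   # ~11.2 GiB with these numbers

          Even if the real overhead is several times smaller, a 32-bit process only gets about 3 GiB of usable address space, so holding every pose at once would explain the killed jobs.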

        • #4203
          Anonymous

            Thanks! The problem is that memory is swamped even when I use the option that filters decoys by energy (meaning fewer of them should be clustered). Is there an option to cluster just a given fraction of the decoys? I couldn’t find one in the User Guide.

            The old code works, but I’d rather avoid it: since I’m new to both versions, I’d prefer to learn just 3.0.

          • #4205
            Anonymous

              I’m not sure what you mean by clustering just a given fraction? The problem is likely just creating and storing the poses in memory before the clustering begins (though I’m not positive of this).
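
              If there’s no built-in option, one workaround is to subset the silent file yourself before clustering. A rough sketch in Python, assuming the usual protein silent-file layout (every per-decoy line ends with its tag, and each decoy’s SCORE: line carries the total score in the second column; check your file’s SCORE: header before trusting that index, and note the file names here are placeholders):

                  # Keep only the best-scoring 10% of decoys from a silent file.
                  fraction = 0.10
                  scores = []                      # (total_score, tag) pairs

                  with open("decoys.out") as f:    # placeholder input name
                      for line in f:
                          cols = line.split()
                          if line.startswith("SCORE:") and cols[-1] != "description":
                              scores.append((float(cols[1]), cols[-1]))

                  scores.sort()                    # lowest (best) score first
                  keep = {tag for _, tag in scores[:int(len(scores) * fraction)]}

                  with open("decoys.out") as fin, open("decoys_best.out", "w") as fout:
                      for line in fin:
                          cols = line.split()
                          header = line.startswith("SEQUENCE:") or (cols and cols[-1] == "description")
                          if header or (cols and cols[-1] in keep):
                              fout.write(line)

              Pointing the cluster app at the smaller file should cut the pose storage (and runtime) proportionally.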

              • #4272
                Anonymous

                  I had a similar memory problem while trying to cluster ~40K decoys using the 3.0 version of CLUSTER. Have you tried expanding your virtual memory? If you don’t mind letting your computer work for 2-3 days without using it for anything else, create a temporary swap file. I first increased the swap space to 20 GB, but that still was not enough: the run died about halfway through with the same allocation error. Increasing the swap space to 100 GB made it work. This was on a Linux box, but I assume the same would be possible on a Mac if you have spare space on your drive.
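
                  For anyone trying the same thing, creating the temporary swap file is the standard Linux procedure, roughly as follows (run as root; the path and size are illustrative). Note that extra swap can’t rescue a 32-bit machine like the original poster’s, since a single process there tops out around 3 GB of address space no matter how much swap exists:

                      dd if=/dev/zero of=/swapfile bs=1M count=102400   # write 100 GiB of zeroes
                      chmod 600 /swapfile                               # root-only access
                      mkswap /swapfile                                  # format it as swap
                      swapon /swapfile                                  # enable it

                      # when the clustering run is finished:
                      swapoff /swapfile
                      rm /swapfile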
