ClusterApp hogs way too much memory…
September 29, 2009 at 7:14 am #366 Anonymous
Hi, all.
I’m trying to cluster some 50,000 de novo decoys; the starting sequence was about 60 aa long. My lab cluster’s queueing system kills the job, and when I run it outside the queueing system it visibly hogs memory.
Some facts:
(1) This happens when clustering both silent and PDB files.
(2) Apparently, Rosetta 2.3 clustering worked fine on the cluster… but, according to a co-worker, I can’t cluster Rosetta 3.0 decoys with the Rosetta 2.3 application. (I am new to both, so I decided to learn 3.0 first.)
(3) The IT admin gave me this info on the cluster:
kernel: 2.6.16.28 SMP, i686, 32bit
gcc-4.1

Is this a bug, or behaviour to be expected?
September 29, 2009 at 1:53 pm #4202 Anonymous
50,000 poses of ~60 aa each is pretty likely to swamp your memory. This is probably expected but undesired behavior: not a memory leak, just not a good use of memory. If the old ++ version can handle that many poses, it’s probably better optimized.
I’m not aware of any reason why ++ clustering should fail against 3.0 structures. A PDB is a PDB is a PDB, so the old code should be able to read them in.
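To put a rough number on the memory pressure (purely illustrative; the per-pose footprint below is an assumed figure, not something measured from Rosetta):

```shell
# Back-of-envelope estimate. 100 KB per pose is a guessed figure covering
# coordinates, energies, and bookkeeping for a ~60-residue full-atom pose.
per_pose_kb=100
decoys=50000
echo "$(( per_pose_kb * decoys / 1024 )) MB"    # prints "4882 MB"
```

Even if the true per-pose cost is a few times smaller, holding 50,000 poses at once is far beyond the ~3 GB of address space a 32-bit i686 process gets, so the job dies regardless of how much physical RAM the node has.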
September 30, 2009 at 11:46 am #4203 Anonymous
Thanks! The problem is that memory is swamped even when I use the option that filters decoys by energy (so fewer of them should be clustered). Is there an option to cluster just a given fraction of the decoys? I couldn’t find one in the User Guide.
The old code works, but I’d rather avoid it: I’m new to both, so I’d prefer to learn just 3.0.
September 30, 2009 at 1:49 pm #4205 Anonymous
I’m not sure what you mean by clustering just a given fraction. The problem is likely just creating and storing all the poses in memory before the clustering begins (though I’m not positive of this).
December 9, 2009 at 2:05 am #4272 Anonymous
I had a similar memory problem while trying to cluster ~40K decoys using the 3.0 version of CLUSTER. Have you tried expanding your virtual memory? If you don’t mind letting your computer work for 2-3 days without using it for anything else, create a temporary swap file. I first increased the swap space to 20 GB, but that still was not enough: it ran out about halfway through, throwing the same allocation error. It worked after increasing the swap space to 100 GB. This was on a Linux box, but I assume the same would be possible on a Mac if you have space to spare on your drive.
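For reference, a temporary swap file on Linux can be set up along these lines. This is a sketch of the approach described above, not a recommendation: the path and size are examples, and every command needs root.

```shell
# Create and enable a temporary 100 GB swap file (example path and size).
sudo dd if=/dev/zero of=/scratch/extra.swap bs=1M count=102400
sudo chmod 600 /scratch/extra.swap    # swap files must not be world-readable
sudo mkswap /scratch/extra.swap
sudo swapon /scratch/extra.swap

# ... run the clustering job ...

# Tear the swap file down afterwards.
sudo swapoff /scratch/extra.swap
sudo rm /scratch/extra.swap
```

Expect the run to be very slow once the job starts paging; swap buys address space and capacity, not speed, which is why the poster budgets 2-3 days.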