ERROR: compiling rosetta3.1 with MPI


    • #648
      Anonymous

        Hi,

        I’m trying to compile rosetta3.1 from source with options:

        ./scons.py bin mode=release extras=mpi

        I had to add the last line to the following section of code in src/SConscript.src:

        # Transform the modified settings into SCons Environment variables.
        # Gives priority to project settings over global settings.
        env = build.environment.Clone()
        env.Prepend(**actual.symbols())
        env.Append(CPPDEFINES='-DMPICH_IGNORE_CXX_SEEK')

        Before adding this line I was getting a lot of errors:
        “Because there is a name conflict between stdio.h and the MPI C++ binding involving SEEK_SET, SEEK_CUR, and SEEK_END, you must either include mpi.h before stdio.h and iostream.h in MPI programs written in C++, or add -DMPICH_IGNORE_CXX_SEEK to the compiler command line to force it to skip the MPI versions of the SEEK_* routines.”
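
        (Side note: in plain SCons, I believe CPPDEFINES entries are usually written without the -D prefix, since SCons adds that itself when it expands the variable; I don’t know whether Rosetta’s build wrappers change this. A minimal, non-Rosetta SConstruct-style sketch of that idiom:)

        # Minimal SConstruct sketch (not Rosetta-specific): SCons prepends its own
        # -D when expanding CPPDEFINES, so the entry is just the macro name.
        env = Environment()
        env.Append(CPPDEFINES=['MPICH_IGNORE_CXX_SEEK'])
        env.Program('hello_mpi', ['hello_mpi.cpp'])  # hypothetical target and source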

        The SEEK_* errors went away, but now at the linking stage I’m getting the following errors (only the top lines shown):

        mpiCC -o build/src/release/linux/2.6/64/x86/gcc/mpi/AbinitioRelax.linuxgccrelease -Wl,-rpath=/data/storage/software/rosetta3.1_MPI/rosetta_source/build/src/release/linux/2.6/64/x86/gcc/mpi build/src/release/linux/2.6/64/x86/gcc/mpi/apps/public/AbinitioRelax.o -Llib -Lexternal/lib -Lbuild/src/release/linux/2.6/64/x86/gcc/mpi -Lsrc -L/usr/local/lib -L/usr/lib -lprotocols -lcore -lnumeric -lutility -lObjexxFCL -lz
        /usr/bin/ld: skipping incompatible /usr/lib/libz.so when searching for -lz
        /usr/bin/ld: skipping incompatible /usr/lib/libz.a when searching for -lz
        /usr/bin/ld: skipping incompatible /usr/lib/librt.so when searching for -lrt
        /usr/bin/ld: skipping incompatible /usr/lib/librt.a when searching for -lrt
        /usr/bin/ld: skipping incompatible /usr/lib/libpthread.so when searching for -lpthread
        /usr/bin/ld: skipping incompatible /usr/lib/libpthread.a when searching for -lpthread
        /usr/bin/ld: skipping incompatible /usr/lib/libdl.so when searching for -ldl
        /usr/bin/ld: skipping incompatible /usr/lib/libdl.a when searching for -ldl
        /usr/bin/ld: skipping incompatible /usr/lib/libc.so when searching for -lc
        /usr/bin/ld: skipping incompatible /usr/lib/libc.a when searching for -lc
        build/src/release/linux/2.6/64/x86/gcc/mpi/apps/public/AbinitioRelax.o: In function `main':
        AbinitioRelax.cc:(.text+0x12a): undefined reference to `std::cerr'
        AbinitioRelax.cc:(.text+0x12f): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*)'
        AbinitioRelax.cc:(.text+0x13a): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::operator<< <char, std::char_traits<char>, std::allocator<char> >(std::basic_ostream<char, std::char_traits<char> >&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
        AbinitioRelax.cc:(.text+0x142): undefined reference to `std::basic_ostream<char, std::char_traits<char> >& std::endl<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&)'
        AbinitioRelax.cc:(.text+0x1bc): undefined reference to `__cxa_begin_catch'

        Is this because it can’t find some libraries?
        Or is it a result of me adding the MPICH_IGNORE_CXX_SEEK flag?
        Or is the MPI version I’m using not compatible with Rosetta 3.1 (I’m using /opt/intel/impi/3.2.0.011/bin64)?

        BTW I can build a non-MPI version successfully.

        Thanks in advance!

      • #4554
        Anonymous

          It looks like you’re missing the standard I/O libraries; this is not too surprising, since it’s exactly what the error message said was conflicting with mpi.h.

          Ultimately I think the better solution is to install a version of MPI that does not have an mpi.h that conflicts with your stdio.h.

          Failing that: only a vanishingly small number of files in Rosetta use mpi.h. The original error message suggests moving that header to the top of those files. Why not try that?

        • #4555
          Anonymous

            I’m also reasonably certain the 3.1 stock abrelax is not MPI-compatible. Do you need the MPI-compatible version?

          • #4563
            Anonymous

              > Failing that: only a vanishingly small number of files in Rosetta use mpi.h. The original error message suggests moving that header to the top of those files. Why not try that?

              I tried that at first, and it worked for most of them, but I found that Matcher.cc has a series of entangled includes that made it impossible (at least for me) to put mpi.h at the very top.

              > I’m also reasonably certain the 3.1 stock abrelax is not MPI-compatible. Do you need the MPI-compatible version?

              Right now I have to distribute the work manually if I want to take advantage of my cluster. For example, if I want to generate 10,000 models, I manually submit 10 different jobs, each producing 1,000 models in a separate directory. I would like to know if there is a better way to submit a single job for 10,000 models and have the work automatically distributed over a user-defined number of nodes/cores (I thought MPI + PBS was the way to do this).
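
              (For context, a rough sketch of the manual splitting I do now; the paths, flags file, and PBS settings below are just placeholders for my setup, not anything from the Rosetta documentation.)

              #!/usr/bin/env python
              # Sketch of the manual work splitting described above: one directory and
              # one PBS job per chunk of models (paths and PBS options are placeholders).
              import os

              n_jobs = 10
              models_per_job = 1000
              exe = '/path/to/rosetta_source/bin/AbinitioRelax.linuxgccrelease'  # placeholder path

              for i in range(n_jobs):
                  workdir = 'run_%02d' % i            # one output directory per job
                  os.makedirs(workdir)
                  with open(os.path.join(workdir, 'job.pbs'), 'w') as f:
                      f.write('#PBS -l nodes=1:ppn=1\n')
                      f.write('cd $PBS_O_WORKDIR\n')
                      f.write('%s @flags -nstruct %d\n' % (exe, models_per_job))
                  os.system('cd %s && qsub job.pbs' % workdir)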

              Thanks!

            • #4564
              Anonymous

                I don’t have any useful advice for getting the MPI build to work; this isn’t an error I’ve seen before. If you can’t change the MPI installation, we’ll have to tweak Rosetta.

                If you aren’t intending to USE the matcher (which is part of the enzyme design package), then let’s just try removing it wholesale from compilation. You can comment that whole folder out of protocols.src.settings and any client executables out of apps.src.settings; a sketch of the idea is below. I’ll cross my fingers for you…
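
                (Just a sketch from memory of what that edit looks like; the real protocols.src.settings lists many more directories and file names, so treat the entries below as placeholders.)

                # protocols.src.settings (sketch only): commenting a directory's entry out
                # of the sources dictionary removes those files from the build.
                sources = {
                    'protocols/abinitio': ['ClassicAbinitio'],        # ...existing entries stay...
                    # 'protocols/match': ['Matcher', 'MatcherTask'],  # commented out: not compiled
                }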

                Anyway, even once you get this working, the 3.1 version of abinitio is not MPI-compatible. Give me your email address and I’ll send along an MPIable version. It takes the same options, etc., as standard abinitio, so once you get an answer to your other SS question (I’ve punted it to someone who knows), that answer should still apply.

              • #4565
                Anonymous

                  my email is
                  trippm at gmail dot com

                  Oh, and what about loop modeling and docking; will those also be MPIable?

                  Thank you very much for your help, smlewis!

                  PS: Just out of curiosity, is manually distributing the workload on a Beowulf-type cluster the way people usually run Rosetta on such a computing system (i.e., with no automatic MPI-like structure)? What other kinds of automatic work distribution compatible with Rosetta are there for such a system?

                • #4570
                  Anonymous

                    In 3.1, MPI coverage is very spotty. Some applications use no job distributor at all; they have no MPI. Some use the “original” job distributor, and some of those have MPI. Some use job distributor 2 (JD2), and all of those have MPI.

                    I believe fixbb is the only “flagship” app using JD2 in the last release (it’s been a year, so I forget). We’re working on moving all the apps to JD2 for 3.2.

                    I’m reasonably certain both loop modeling and docking allow MPI. Neither of them works the same way as the Abinitio_MPI I’m sending you, nor the same as each other, nor the same as fixbb. I think for loop modeling it’s all under the hood: just compile with extras=mpi and it will work. With docking you’ll need to pre-create a series of output directories for the result files to land in.
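
                    (For the docking case, pre-creating numbered output directories ahead of time is all that’s needed; the naming scheme in this sketch is purely illustrative, so check the docking documentation for what the MPI run actually expects.)

                    # Sketch: pre-create one output directory per MPI process before launching
                    # the docking run (the 'output_N' naming here is only an illustration).
                    import os
                    n_procs = 16                      # however many MPI processes you plan to run
                    for i in range(n_procs):
                        os.makedirs('output_%d' % i)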

                    The development labs all run Rosetta in slightly different ways depending on the hardware they have available. The Kuhlman lab mostly uses MPI on a cluster held together by LSF. The Gray lab uses a not-too-huge cluster held together by Condor; they never use MPI but do use the -multiple_processes_in_one_directory flag, which is not precisely manual distribution but does rely on independent processes. The Baker lab has a larger Condor cluster plus BOINC access to Rosetta@home. I don’t know what the other development labs use.
