Member Site › Forums › Rosetta 3 › Rosetta 3 – Build/Install › Rosetta 3.8 GPU compilation
- This topic has 3 replies, 2 voices, and was last updated 7 years, 7 months ago by Anonymous.
May 23, 2017 at 1:01 pm #2658Anonymous
Hello,
I am getting an error while compiling Rosetta with GPU options. I am using the following command to build Rosetta 3.8 with the OpenCL GPU option:
scons mode=release extras=opencl bin
The log produced during the building of executables is as follows:
g++ -o build/src/release/linux/4.8/64/x86/gcc/5.4/opencl/basic/gpu/GPU.os -c -std=c++0x -ffor-scope -isystem external/boost_1_55_0/ -isystem external/ -isystem external/include/ -isystem external/dbio/ -pipe -Wall -Wextra -pedantic -Werror -Wno-long-long -Wno-strict-aliasing -march=core2 -mtune=generic -O3 -ffast-math -fno-finite-math-only -funroll-loops -finline-functions -finline-limit=20000 -s -Wno-unused-variable -Wno-unused-parameter -fPIC -DBOOST_ERROR_CODE_HEADER_ONLY -DBOOST_SYSTEM_NO_DEPRECATED -DBOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS -DPTR_STD -DNDEBUG -DUSEOPENCL -Isrc -Iexternal/include -Isrc/platform/linux/64/gcc/5.4 -Isrc/platform/linux/64/gcc -Isrc/platform/linux/64 -Isrc/platform/linux -Iexternal/boost_1_55_0 -Iexternal/libxml2/include -Iexternal -Iexternal/dbio -I/usr/include -I/usr/local/include -I/opt/AMDAPP/include -I/opt/AMDAPP/include/CL -I/opt/AMDAPP/lib/x86 -I/opt/AMDAPP/lib/x86_64 src/basic/gpu/GPU.cc
src/basic/gpu/GPU.cc: In member function 'int basic::gpu::GPU::Init()':
src/basic/gpu/GPU.cc:283:22: error: '_cl_command_queue* clCreateCommandQueue(cl_context, cl_device_id, cl_command_queue_properties, cl_int*)' is deprecated [-Werror=deprecated-declarations]
dev.commandQueue = clCreateCommandQueue(context_, dev.device, profiling_ ? CL_QUEUE_PROFILING_ENABLE : 0, NULL);
^
In file included from /usr/include/CL/opencl.h:42:0,
from src/basic/gpu/GPU.hh:29,
from src/basic/gpu/GPU.cc:27:
/usr/include/CL/cl.h:1359:1: note: declared here
clCreateCommandQueue(cl_context /* context */,
^
src/basic/gpu/GPU.cc:283:22: error: '_cl_command_queue* clCreateCommandQueue(cl_context, cl_device_id, cl_command_queue_properties, cl_int*)' is deprecated [-Werror=deprecated-declarations]
dev.commandQueue = clCreateCommandQueue(context_, dev.device, profiling_ ? CL_QUEUE_PROFILING_ENABLE : 0, NULL);
^
In file included from /usr/include/CL/opencl.h:42:0,
from src/basic/gpu/GPU.hh:29,
from src/basic/gpu/GPU.cc:27:
/usr/include/CL/cl.h:1359:1: note: declared here
clCreateCommandQueue(cl_context /* context */,
^
src/basic/gpu/GPU.cc:283:113: error: '_cl_command_queue* clCreateCommandQueue(cl_context, cl_device_id, cl_command_queue_properties, cl_int*)' is deprecated [-Werror=deprecated-declarations]
dev.commandQueue = clCreateCommandQueue(context_, dev.device, profiling_ ? CL_QUEUE_PROFILING_ENABLE : 0, NULL);
^
In file included from /usr/include/CL/opencl.h:42:0,
from src/basic/gpu/GPU.hh:29,
from src/basic/gpu/GPU.cc:27:
/usr/include/CL/cl.h:1359:1: note: declared here
clCreateCommandQueue(cl_context /* context */,
^
src/basic/gpu/GPU.cc: In member function 'int basic::gpu::GPU::ExecuteKernel(const char*, int, int, int, ...)':
src/basic/gpu/GPU.cc:687:26: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
arg.k_p = (void*)arg.i;
^
src/basic/gpu/GPU.cc:695:26: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
arg.k_p = (void*)arg.i;
^
cc1plus: all warnings being treated as errors
scons: *** [build/src/release/linux/4.8/64/x86/gcc/5.4/opencl/basic/gpu/GPU.os] Error 1
scons: building terminated because of errors.

Does anyone know what this error means or, better, how I can solve it?
May 23, 2017 at 2:48 pm #12351Anonymous
Those are all warnings emitted from the GPU code itself, or even from the system headers. Probably the simplest fix is to turn off the warnings-as-errors option in your build settings; this comment explains how: https://www.rosettacommons.org/comment/7584#comment-7584
We use warnings-as-errors for the standard build, but the GPU code is probably not regularly tested to make sure it stays warning-free (and certainly isn’t tested against multiple compiler setups, all of which produce different warnings).
Don’t forget -j# when you compile, for speed! (Although definitely use -j1 when making nice log files for us!)
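For anyone who wants a concrete starting point, the fix boils down to stripping -Werror out of the scons build settings. A rough sketch follows; the file path tools/build/basic.settings and the exact flag layout are assumptions taken from the linked comment, so check your own checkout (it may be user.settings or a differently formatted flags list in your Rosetta version):

```shell
# The settings path below comes from the linked comment and is an assumption --
# verify it against tools/build/basic.settings (or user.settings) in your checkout.
settings=tools/build/basic.settings

# For a safe dry run, fall back to a local demo file when the real one is absent.
if [ ! -f "$settings" ]; then
  settings=./basic.settings.demo
  printf '"flags": [ "-Wall", "-Werror", "-pedantic" ]\n' > "$settings"
fi

cp "$settings" "$settings.bak"                # keep a backup before editing
sed -i 's/"-Werror",\{0,1\} *//g' "$settings" # drop warnings-as-errors
```

After that, re-run scons mode=release extras=opencl bin; the deprecation and int-to-pointer warnings will still be printed, but they should no longer abort the build.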