Compilation of next with Chrono using gcc 6.3.0



I’m facing compilation issues with next, but only when it is coupled with Chrono, using gcc 6.3.0.
The error does not occur with gcc 4.9.2. It is the following:

/usr/lib/gcc/x86_64-linux-gnu/6/include/avx512fintrin.h(9229): error: argument of type “const void *” is incompatible with parameter of type “const float *”
/usr/lib/gcc/x86_64-linux-gnu/6/include/avx512fintrin.h(9242): error: argument of type “const void *” is incompatible with parameter of type “const double *”
/usr/lib/gcc/x86_64-linux-gnu/6/include/avx512fintrin.h(9253): error: argument of type “const void *” is incompatible with parameter of type “const double *”
/usr/lib/gcc/x86_64-linux-gnu/6/include/avx512fintrin.h(9266): error: argument of type “const void *” is incompatible with parameter of type “const float *”

A very similar issue actually seems to be reported here:

I can manage to compile with gcc 6.3.0 anyway by setting -O0 instead of -O3 in the CXXFLAGS, but this requires a local modification of the Makefile, which would be better to avoid. We’ve also tried adding the flag -use_f16c=0 to the CXXFLAGS of GPUSPH, as suggested as a solution in that GitHub issue discussion, but with no effect.

I’ve even tried adding -use_f16c=0 to the Chrono CXX_FLAGS, but with no effect either…

Do you have an idea what we should do? Is it possible to simply override the -O3 flag with -O0 using Makefile.local? Or should we modify the Makefile in next somehow?
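For reference, the override I have in mind would look like this (a sketch, assuming Makefile.local is included after the main Makefile sets its defaults, and that GPUSPH appends to CXXFLAGS rather than overwriting it):

```make
# Makefile.local (sketch) — with GCC, the last -O option on the
# command line wins, so appending -O0 overrides the default -O3.
CXXFLAGS += -O0
```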


Project Chrono: supported version


Since Chrono makes extensive use of the compiler’s vector instructions, when using GPUSPH with it, it becomes particularly important to use a version of CUDA with full support for the compiler.

GCC 6.3 is only supported by CUDA 9 and later; older versions do not support it. For example, the installation guide for CUDA 8 lists GCC 5.3.1 as the highest supported version. I’m guessing you are using CUDA 8 or older on this machine.

There are two possibilities in this case:

  1. if you can upgrade CUDA and the newer versions still support your GPU architecture, this is probably the most robust solution;

  2. otherwise, if you are building Chrono yourself, an alternative approach is to build it with a GCC version supported by your CUDA version. To do this, you need to wipe out the Chrono build directory, recreate it, and run cmake again after exporting the CC and CXX variables set to the correct version. For example:

      rm -rf build   # wipe the old build dir: CMake caches the compiler choice
      mkdir build
      export CC=gcc-4.9
      export CXX=g++-4.9
      cd build
      cmake ..

     (assuming the build directory is under the root chrono working directory).
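If you prefer not to rely on environment variables, an equivalent approach is to pass the compilers to CMake explicitly (same caveat: the build directory must be recreated, since CMake caches the compiler choice on the first configure). A sketch, with gcc-4.9/g++-4.9 as example names; use whatever your CUDA version supports:

```shell
rm -rf build && mkdir build && cd build
cmake -DCMAKE_C_COMPILER=gcc-4.9 -DCMAKE_CXX_COMPILER=g++-4.9 ..
```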


Thank you very much for your answer Giuseppe. I’m sorry, I had forgotten to mention that I am also using CUDA 9.2 for the compilation. So my issue is:

  • I cannot compile GPUSPH and Chrono with gcc 6.3.0 and CUDA 9.2
  • I can compile with gcc 4.9.2 and CUDA 9.2



Hello Agnès,

this is very strange: CUDA 9.2 supports gcc up to version 7.2.0 (unless support for version 6.3 was somehow dropped when support for version 7.2.0 was introduced). What distribution are you using? Do you have multiple versions of CUDA installed? Can you paste the output of make show for GPUSPH?




Sure, the output of make show is:

[CONF] make show
GPUSPH version: v4.1+1044-65a45b48
Platform: Linux
Architecture: x86_64
Current dir: /gpfsgaia/scratch/F41672/gpusph
This Makefile: /gpfsgaia/scratch/F41672/gpusph/Makefile
Used Makefiles: Makefile Makefile.conf Makefile.local dep/problems/CompleteSaExample.d
Linearization: yzx
Snapshot file: ./GPUSPH-v4.1+1044-65a45b48-2019-04-17.tgz
Last problem: CompleteSaExample
Sources dir: src src/adaptors src/cuda src/geometries src/integrators src/problems src/writers
Options dir: options
Objects dir: build build/adaptors build/cuda build/geometries build/integrators build/problems build/problems/user build/writers
Scripts dir: scripts
Docs dir: docs
Doxygen conf:
Debug: 0
CXX: g++
CXX version: g++ (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
MPICXX: /opt/openmpi/2.1.2-ifs10.7/bin/mpicxx
nvcc: /opt/nvidia-cuda-toolkit/9.2/bin/nvcc -ccbin=g++
nvcc version: 9.2
LINKER: /opt/nvidia-cuda-toolkit/9.2/bin/nvcc -ccbin=g++ --compiler-options -I/opt/openmpi/2.1.2-ifs10.7/include/openmpi,-I/opt/openmpi/2.1.2-ifs10.7/include/openmpi/opal/mca/event/libevent2022/libevent,-I/opt/openmpi/2.1.2-ifs10.7/include/openmpi/opal/mca/event/libevent2022/libevent/include,-I/opt/openmpi/2.1.2-ifs10.7/include,-pthread --linker-options -rpath --linker-options /opt/openmpi/2.1.2-ifs10.7/lib/x86_64-linux-gnu --linker-options --enable-new-dtags -L/opt/openmpi/2.1.2-ifs10.7/lib/x86_64-linux-gnu -lmpi_cxx -lmpi
Compute cap.: 70
Fastmath: 0
MPI version: 3.1 (OpenMPI 2.1.2)
default paths: /opt/nvidia-cuda-toolkit/9.2/include /opt/openmpi/2.1.2-ifs10.7/include /usr/include/c++/6 /usr/include/x86_64-linux-gnu/c++/6 /usr/include/c++/6/backward /usr/lib/gcc/x86_64-linux-gnu/6/include /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/6/include-fixed /usr/include/x86_64-linux-gnu /usr/include
INCPATH: -Isrc -Isrc/adaptors -Isrc/cuda -Isrc/geometries -Isrc/integrators -Isrc/problems -Isrc/writers -Isrc/problems -Isrc/problems/user -Ioptions -isystem /mnt/.tgvdv2/projets/projets.001/sph.109/Software/chrono/install-gaia/include -isystem /data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/include/ -isystem /data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/include//chrono -isystem /data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/include//chrono/collision/bullet
LIBPATH: -L/usr/local/lib -L/opt/nvidia-cuda-toolkit/9.2/lib64 -L/data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/lib/
LIBS: -lcudart -L/usr/lib/x86_64-linux-gnu/hdf5/serial -lhdf5 -lpthread -lrt -lChronoEngine
LDFLAGS: --linker-options -rpath,/data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/lib/ -L/usr/local/lib -L/opt/nvidia-cuda-toolkit/9.2/lib64 -L/data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/lib/ -arch=sm_70
CPPFLAGS: -Isrc -Isrc/adaptors -Isrc/cuda -Isrc/geometries -Isrc/integrators -Isrc/problems -Isrc/writers -Isrc/problems -Isrc/problems/user -Ioptions -isystem /mnt/.tgvdv2/projets/projets.001/sph.109/Software/chrono/install-gaia/include -isystem /data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/include/ -isystem /data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/include//chrono -isystem /data/rd/projets/projets.001/sph.109/Software/chrono/install-gaia/include//chrono/collision/bullet -D__STDC_CONSTANT_MACROS -D__STDC_LIMIT_MACROS -D_GLIBCXX_USE_C99_MATH -DUSE_HDF5=1 -I/usr/include/hdf5/serial -D__COMPUTE__=70
CXXFLAGS: -use_f16c=0 -m64 -std=c++11 -O3
CUFLAGS: -arch=sm_70 --generate-line-info -std=c++11 --compiler-options -use_f16c=0,-m64,-O3

The compilation works with -O0 instead of -O3, and the option -use_f16c=0 doesn’t seem to change the behaviour.



This is very strange: nvcc 9.2 should support g++ 6.3. I think you can ignore the use_f16c flag; it’s not something we handle in any way. Instead, can you try adding -march=native to the CXXFLAGS in your Makefile.local, as discussed in this GitHub issue? (While the issue is different, the cause is related.)
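Concretely, the suggested change would look like this in Makefile.local (a sketch, assuming the file is included after the default CXXFLAGS are set):

```make
# Makefile.local (sketch) — -march=native enables the instruction-set
# extensions of the build machine, so the compiler and the intrinsic
# headers agree on which vector instructions are actually available.
CXXFLAGS += -march=native
```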


Hmm, I just tried that and unfortunately I still get the same error during compilation. I even tried to undef CHRONO_HAS_AVX, but even that didn’t solve the issue, which seems very strange.


This is extremely odd, and I’m running out of ideas.

Can you try a clean build without MPI? (make clean and then make mpi=0)


This is not linked to the use of MPI: I get the same error when I compile with mpi=0 instead of 1. I guess that for the moment I’ll try to get gcc 4.9.2 installed on that cluster.