CMakeLists.txt:

```cmake
cmake_minimum_required(VERSION 3.8.2)
project(Test C CXX Fortran)
find_package(MPI)
```
When using Intel 19 compilers, find_package(MPI) produces:
```
-- Could NOT find MPI_C (missing: MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND) (found version "3.1")
```
This happens only with CMake >= 3.10; with CMake <= 3.9, MPI C/CXX are found.
Another thing I noticed is that it only happens when the OS is Ubuntu 17.10 or 18.04.
I do not see this problem on RedHat 6 or 7, or on Debian 9.
Not sure if it matters, but I also tried with cmake_minimum_required(VERSION 3.11).
Manually passing MPI_C_LIBRARIES or MPI_CXX_LIBRARIES did not help either.
My cmake command:

```sh
cmake .. -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc -DCMAKE_Fortran_COMPILER=ifort
```

Output of icc -v:

```
icc version 19.0.0.117 (gcc version 8.2.0 compatibility)
```
The gcc used by the Intel compiler driver is installed either manually or via Spack on the different OSes tested. When using the default gcc provided by Ubuntu, this problem of not finding MPI C/CXX does not show up.
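For anyone trying to reproduce: icc derives its gcc compatibility mode, and with it the glibc header paths, from the gcc visible in the environment. A quick sanity check, assuming the Spack-installed (or manually installed) gcc is the one on PATH:

```sh
which gcc   # should point at the Spack/manually installed gcc, not /usr/bin/gcc
icc -v      # reports the gcc version icc is operating in compatibility with
```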
I cannot reproduce this issue; it "works for me" with CMake 3.13.2 on Ubuntu 18.04. I didn't try other versions.
CMakeLists.txt:

```cmake
cmake_minimum_required(VERSION 3.0)
project(Test C CXX Fortran)
find_package(MPI)
```
```
$ CC=icc CXX=icpc FC=ifort cmake ..
-- The C compiler identification is Intel 19.0.0.20181018
-- The CXX compiler identification is Intel 19.0.0.20181018
-- The Fortran compiler identification is Intel 19.0.0.20181018
...
-- Found MPI_C: ~/intel/compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib/libmpifort.so (found version "3.1")
-- Found MPI_CXX: ~/intel/compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib/libmpicxx.so (found version "3.1")
-- Found MPI_Fortran: ~/intel/compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib/libmpifort.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Configuring done
```
Thanks for looking into this! icc -v shows the gcc version compatibility. In your case, I am assuming the gcc used by the Intel compiler driver was installed via apt-get? If so, the problem will not show up. To reproduce the issue, gcc has to be installed either manually or via Spack, and icc -v should point to that gcc and not the default one in Ubuntu 18.04.
I can reproduce this with Intel 17 on CentOS 7, with the latest (3.14.1) CMake. Reverting to CMake 3.9.6 in the same environment allowed the build to be successful.
Is this still a current bug?
I have CentOS 7.6 with the RedHat devtoolset-7 kit installed, cmake v3.11.4, and the Intel v19 compiler, and I get the same issue with find_package(MPI):
```
-- Could NOT find MPI_CXX (missing: MPI_CXX_WORKS)
CMake Error at /usr/share/cmake-3.11/Modules/FindPackageHandleStandardArgs.cmake:137 (message):
  Could NOT find MPI (missing: MPI_CXX_FOUND) (found version "3.1")
Call Stack (most recent call first):
  /usr/share/cmake-3.11/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
  /usr/share/cmake-3.11/Modules/FindMPI.cmake:1663 (find_package_handle_standard_args)
  cmake/modules/FindMPICFS.cmake:15 (find_package)
  CMakeLists.txt:201 (include)
```
I've never been able to reproduce this issue, so I imagine it's still there. CMake 3.15 will emit error logs if MPI fails to build (i.e. you get this MPI_<LANG>_WORKS issue). If anyone can reproduce the issue with that version and pass along the CMakeError.log file, we can possibly triage this.
I'm a bit curious though, I've just tried with CMake 3.13.4, RHEL 7.6, devtoolset-8 and Intel 19u3 without running into any issues.
@ChrisTX I did not see this issue on RHEL 6 or 7, only on Ubuntu with a user-installed gcc. I am using Intel 19u3 and gcc 8.3 on Ubuntu 18.04. Attached is the CMakeError.log.
@ajaypanyala Thanks for the error log. I can't really say much from it, other than it's not directly an MPI problem. The MPI settings and command line have been parsed correctly and the reason you get the MPI_<LANG>_WORKS errors is due to the compiler failing to locate some glibc headers. These are included with #include instead of #include_next, so the order of the include directories shouldn't matter, either.
The difference between 3.9.x and 3.10.x is that since CMake 3.10, the MPI functionality is verified with a try compile against the imported MPI target. If that fails, as a sanity check (and in order to prevent MPI implementations for other compilers from being loaded), you get this MPI_<LANG>_WORKS error. Most likely this is just a case of 3.10 detecting a problem in the environment, 3.9 did not due to a lack of this verification.
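For anyone digging into this: since the check described above is an internal try_compile, CMake can be asked to keep the temporary build trees so the failing MPI test compile can be re-run by hand (a sketch, reusing the compiler settings from the original report):

```sh
# --debug-trycompile keeps the try_compile trees under CMakeFiles/CMakeTmp;
# the failing compiler command lines also land in CMakeFiles/CMakeError.log
cmake .. -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc --debug-trycompile
```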
For this matter, does compiling MPI example binaries with mpiicc work in your environment?
The only other lead I'd have right now would be the difference between -I and -isystem (due to the NO_SYSTEM_FROM_IMPORTED) but that should not matter or make any difference for the functionality check.
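To run that check by hand, a minimal sketch (assuming the Intel environment scripts have been sourced; hello.c is a throwaway file name):

```sh
# Build and run a minimal MPI program with the Intel MPI wrapper
cat > hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("hello from rank %d\n", rank);
    MPI_Finalize();
    return 0;
}
EOF
mpiicc hello.c -o hello && mpirun -n 2 ./hello
```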
```
In file included from hello.cpp(1):
/usr/include/stdio.h(27): catastrophic error: cannot open source file "bits/libc-header-start.h"
  #include <bits/libc-header-start.h>
```
I had to manually append the location of the above header to CPATH to make the test program compile. Once CPATH is set, the MPI_<LANG>_WORKS error does not show up and the Intel MPI libs are found. Thanks!
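For reference, the workaround amounts to something like the following; the multiarch directory is Ubuntu's layout, and other distros place the glibc arch-specific headers elsewhere:

```sh
# Make bits/libc-header-start.h and friends visible to icc;
# on Ubuntu they live under the multiarch include directory
export CPATH=/usr/include/x86_64-linux-gnu${CPATH:+:$CPATH}
```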
@ajaypanyala So mpiicc is broken for you? Is the regular icc broken as well? icc parses the include paths it's going to use from the gcc loaded in the environment, so I imagine the issue is a symptom of a more generic bug. Either way, this sounds like something that should be fixed in Spack, so if you can triage the issue, please report this to them. However, if the MPI compiler itself does not work, CMake can't possibly find working MPI settings - the fact that CMake 3.9 claimed MPI was found was a bug.
@john.verdicchio1 @tanderson92 Can either of you try again with CMake 3.15-rc1 and provide me your CMakeError.log? I assume your issues are going to be different from this one.
I'm away from my computer for 2 days but will get back to you ASAP when I return.

I do note that mpiicpc/mpiicc seem to miss out libfabric.so in the linker. I have to manually force this library to be included; I thought that was the point of the MPI wrappers.
> mpiicpc/mpiicc seem to miss out libfabric.so in the linker

You shouldn't need it for shared library builds since it's a private dependency of MPI. It should only need to be added to the link line when building with a static MPI.
That's what I thought, but it's happening. I have a default install of the Intel compiler. I've used the RedHat developer toolset, giving me gcc 7.3. With /opt/intel/bin/compilervars.csh sourced and scl enable devtoolset-7 active, I get this linker issue.
@ChrisTX It's been a while since I last used the Intel toolchain, but yes, I remembered only yesterday that even with cmake-3.9, I still had to set CPATH once the cmake configure step passes and I run make (i.e. icc won't compile anything without adjusting CPATH). I did not realize this was connected to the MPI_<LANG>_WORKS error until you mentioned that cmake >= 3.10 uses a try compile to verify MPI functionality. It makes more sense to me now. Really glad it's not a CMake >= 3.10 bug. In general, icc does not work with a Spack-installed gcc unless /usr/include/x86_64-linux-gnu (i.e. the system default headers) is appended to CPATH (at least on Ubuntu). I will report this to the Spack team. @john.verdicchio1 @tanderson92 could you please try appending the header paths to CPATH? I am wondering if it's the same issue on your end.
> You shouldn't need it for shared library builds since it's a private dependency of MPI. It should only need to be added to the link line when building with a static MPI.
It still needs to resolve, i.e. the library path has to contain it. The Intel compilers have been shipping their own libfabric since Intel 2019 and as of Update 1, it has been made the default. The location of the library is however different from the normal Intel MPI libraries. If there was something wrong with the settings, this could cause problems if no system wide libfabric is available.
If it's really related to libfabric, it would help very much if you could also provide the exact Intel 19 Update you're using (they've been switching around the included libfabric wildly with the subsequent updates for 19), what the output of mpiicc -show is and if/which libfabric-devel package is installed.
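One way to check whether libfabric resolves at all, assuming Intel MPI 2019's libmpi.so records it as a dependency (a sketch; adjust the install path to your update):

```sh
# "not found" next to libfabric.so.1 means the loader cannot resolve it
ldd /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/lib/release/libmpi.so | grep -i fabric
# Is the bundled libfabric directory on the library search path?
echo "$LIBRARY_PATH" | tr ':' '\n' | grep -i fabric
```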
Back to my proper computer...
```
$ mpiicc -show
icc -I/opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/include
    -L/opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/lib/release
    -L/opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/lib
    -Xlinker --enable-new-dtags
    -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/lib/release
    -Xlinker -rpath -Xlinker /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/lib
    -lmpifort -lmpi -ldl -lrt -lpthread
```
I've found a src.tgz here, /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/libfabric, that contains the src to build libfabric. It builds a version of libfabric.so.1.10.1.
@john.verdicchio1 Right, after loading compilervars.sh or psxevars.sh, the environment variable LIBRARY_PATH should contain /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/libfabric/lib. In that folder, there is a pre-built libfabric.so. Note that you can change the behavior here by setting I_MPI_OFI_LIBRARY_INTERNAL to keep that directory from ending up in there. In that case, Intel requires a global libfabric package to be installed for linking.

Either way, as I've said before, if mpiicc can't compile a test MPI application, then there's something wrong in your environment and CMake cannot mitigate that. If that isn't the case, can you provide the contents of your LIBRARY_PATH and the CMakeError.log of 3.15-rc1?
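As a sketch of that toggle (assuming the variable is set before the Intel environment scripts run, and that a system libfabric is installed for the external case):

```sh
# Ask Intel MPI not to put its bundled libfabric directory on the paths,
# so a system-wide libfabric is used instead
export I_MPI_OFI_LIBRARY_INTERNAL=0
source /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/bin/mpivars.sh
```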
Having taken on board the "can I compile an MPI code" statement, I tried to compile and run the Intel MPI tests. No luck. To cut to the chase, I have done several things:

1. There is a

   ```csh
   source /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/bin/mpivars.csh intel64  # the MPI libraries
   ```

   to get all the environment variables set. I was unaware of this when I started with the Intel compiler.

2. I am not convinced that the libfabric Intel supplied was correct. I rebuilt and installed it from the src supplied by Intel, and the MPI tests worked. This may have been due in part to 1 above.

3. I upgraded to cmake 3.15-rc1.

Things appear to be working OK.
So sorry if I've done a whole bunch of really stupid stuff and wasted your time over this one.

The /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/test directory was most useful.
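For others landing here, those bundled sources make for a quick end-to-end check (a sketch; the exact file names under mpi/test can vary between releases):

```sh
# Compile and run one of the MPI test programs shipped with Intel MPI
cp /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/test/test.c .
mpiicc test.c -o testmpi && mpirun -n 2 ./testmpi
```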
@john.verdicchio1 No worries. libfabric with Intel MPI 2019 is generally a bit dodgy. The "optimized" source they ship is usually based off some pre-release or alpha versions and I've had issues with the shipped binaries in the past myself, albeit not during linking.
I'm going to close this issue then, as it turned out to be environmental issues.