[c426-123.stampede2.tacc.utexas.edu:31927] mca_base_component_repository_open: unable to open mca_ess_singleton: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_ess_singleton.so: failed to map segment from shared object (ignored)
--------------------------------------------------------------------------
A requested component was not found, or was unable to be opened.  This
means that this component is either not installed or is unable to be
used on your system (e.g., sometimes this means that shared libraries
that the component requires are unable to be found/loaded).  Note that
Open MPI stopped checking at the first component that it did not find.

Host:      c426-123.stampede2.tacc.utexas.edu
Framework: ess
Component: singleton
--------------------------------------------------------------------------
[c426-123.stampede2.tacc.utexas.edu:31927] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 247
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_base_open failed
  --> Returned value Not found (-13) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Not found" (-13) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[c426-123.stampede2.tacc.utexas.edu:31927] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
Command exited with non-zero status 1
{"realtime":5.68,"usertime":3.98,"systime":12.18,"memmax":77300,"memavg":0}
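
For context, the failure above happens inside MPI_Init itself (ompi_rte_init -> orte_ess_base_open -> the ess/singleton component), before any user code runs. A minimal sketch of the kind of program that exercises this path is shown below; this is an illustrative example, not the actual source of the job that produced the log.

    /* mpi_init_check.c -- hypothetical minimal reproducer sketch.
     * Any Open MPI binary that calls MPI_Init goes through the same
     * runtime bring-up reported in the log above; if the ess component
     * cannot be loaded, the program aborts before reaching user code. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* The log reports the failure here, during MPI_Init. */
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("rank %d initialized\n", rank);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and run against the same Open MPI installation, a program like this would presumably hit the same "failed to map segment from shared object" error at MPI_Init, since the problem lies in loading mca_ess_singleton.so rather than in the application code.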