[c409-063.stampede2.tacc.utexas.edu:265823] mca_base_component_repository_open: unable to open mca_state_app: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_state_app.so: failed to map segment from shared object (ignored)
[c409-063.stampede2.tacc.utexas.edu:265823] mca_base_component_repository_open: unable to open mca_state_hnp: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_state_hnp.so: failed to map segment from shared object (ignored)
[c409-063.stampede2.tacc.utexas.edu:265823] mca_base_component_repository_open: unable to open mca_state_novm: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_state_novm.so: failed to map segment from shared object (ignored)
[c409-063.stampede2.tacc.utexas.edu:265823] mca_base_component_repository_open: unable to open mca_state_orted: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_state_orted.so: failed to map segment from shared object (ignored)
[c409-063.stampede2.tacc.utexas.edu:265823] mca_base_component_repository_open: unable to open mca_state_tool: /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_state_tool.so: failed to map segment from shared object (ignored)
[c409-063.stampede2.tacc.utexas.edu:265823] [[8400,1],0] ORTE_ERROR_LOG: Error in file ess_singleton_module.c at line 320
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_state_base_select failed
  --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_init failed
  --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[c409-063.stampede2.tacc.utexas.edu:265823] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
Command exited with non-zero status 1
{"realtime":26.20,"usertime":31.31,"systime":32.31,"memmax":129408,"memavg":0}