--------------------------------------------------------------------------
By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default.  The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib
MCA parameter to true.

  Local host:              c013
  Local adapter:           mlx5_0
  Local port:              1
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There is at least non-excluded one OpenFabrics device found,
but there are no active ports detected (or Open MPI was unable to use
them).  This is most certainly not what you wanted.  Check your
cables, subnet manager configuration, etc.  The openib BTL will be
ignored for this job.

  Local host: c013
--------------------------------------------------------------------------
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Traceback (most recent call last):
  File "../../td/GrainBoundaryCubicCrystalSymmetricTiltRelaxedEnergyVsAngle__TD_410381120771_003/runner", line 101, in <module>
    thetas, relaxed_energies, minimum_distances, sigmas = compute_gb_tilt_energy(
  File "/mmfs1/scratch/bwaters2/bwaters/job-1db981f8-45c3-4dab-b636-6c7bd5e39917-007-5a330eca-cc27-423e-ad7d-2c8c62d180e5/TE_629863933937_000-and-MO_959249795837_003-1701369824/staged_job_files/repository/td/GrainBoundaryCubicCrystalSymmetricTiltRelaxedEnergyVsAngle__TD_410381120771_003/compute_gb_tilt_energy.py", line 473, in compute_gb_tilt_energy
    raise RuntimeError(
RuntimeError: Error: LAMMPS did not generate a dump file -- something probably went wrong.
Command exited with non-zero status 1
{"realtime":1.60,"usertime":3.16,"systime":13.71,"memmax":62604,"memavg":0}
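
The warning above names the btl_openib_allow_ib MCA parameter as the override.
A minimal sketch of setting it when launching the MPI job from Python; the
mpirun/LAMMPS command line and input file here are hypothetical stand-ins, as
the actual invocation used by the runner is not visible in this log:

    import os
    import subprocess

    # Open MPI reads MCA parameters from OMPI_MCA_<name> environment variables,
    # so exporting OMPI_MCA_btl_openib_allow_ib is equivalent to passing
    # "--mca btl_openib_allow_ib true" on the mpirun command line.
    env = os.environ.copy()
    env["OMPI_MCA_btl_openib_allow_ib"] = "true"

    # Hypothetical LAMMPS invocation -- the real command used by the test
    # driver is not shown in the log above.
    cmd = ["mpirun", "-np", "2", "lmp", "-in", "in.lammps"]
    result = subprocess.run(cmd, env=env, capture_output=True, text=True)
    print(result.stdout)
    print(result.stderr)

Note that the openib warnings are informational; the job itself fails later,
when the test driver finds no LAMMPS dump file and raises the RuntimeError
shown in the traceback.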