.. _mpi_neighbor_allgather:


MPI_Neighbor_allgather
======================

.. include_body

:ref:`MPI_Neighbor_allgather`, :ref:`MPI_Ineighbor_allgather`,
:ref:`MPI_Neighbor_allgather_init` |mdash| Gathers and distributes data
from and to all neighbors

.. The following directive tells the man page generation script to
   generate multiple bindings for this file.

.. mpi-bindings: MPI_Neighbor_allgather, MPI_Ineighbor_allgather, MPI_Neighbor_allgather_init

.. The following file was automatically generated
.. include:: ./bindings/mpi_neighbor_allgather.rst

INPUT PARAMETERS
----------------

* ``sendbuf``: Starting address of send buffer (choice).
* ``sendcount``: Number of elements in send buffer (integer).
* ``sendtype``: Datatype of send buffer elements (handle).
* ``recvbuf``: Starting address of receive buffer (choice).
* ``recvcount``: Number of elements received from any neighbor (integer).
* ``recvtype``: Datatype of receive buffer elements (handle).
* ``comm``: Communicator (handle).

OUTPUT PARAMETERS
-----------------

* ``recvbuf``: Address of receive buffer (choice).
* ``request``: Request (handle, non-blocking and persistent only).
* ``ierror``: Fortran only: Error status (integer).

DESCRIPTION
-----------

:ref:`MPI_Neighbor_allgather` is similar to :ref:`MPI_Allgather`, except
that only the neighboring processes receive the result, instead of all
processes. The neighbors and the buffer layout are determined by the
topology of *comm*.

The type signature associated with *sendcount*, *sendtype* at a process
must be equal to the type signature associated with *recvcount*,
*recvtype* at any other process.

NEIGHBOR ORDERING
-----------------

For a distributed graph topology, created with
:ref:`MPI_Dist_graph_create`, the sequence of neighbors in the send and
receive buffers at each process is defined as the sequence returned by
:ref:`MPI_Dist_graph_neighbors` for destinations and sources,
respectively.

For a general graph topology, created with :ref:`MPI_Graph_create`, the
order of neighbors in the send and receive buffers is defined as the
sequence of neighbors as returned by :ref:`MPI_Graph_neighbors`. Note
that general graph topologies should generally be replaced by
distributed graph topologies.

For a Cartesian topology, created with :ref:`MPI_Cart_create`, the
sequence of neighbors in the send and receive buffers at each process is
defined by the order of the dimensions, first the neighbor in the
negative direction and then in the positive direction, with displacement
1. The numbers of sources and destinations in the communication routines
are ``2*ndims``, with *ndims* defined in :ref:`MPI_Cart_create`. If a
neighbor does not exist, i.e., at the border of a Cartesian topology in
the case of a non-periodic virtual grid dimension (i.e.,
``periods[...] == false``), then this neighbor is defined to be
``MPI_PROC_NULL``.

If a neighbor in any of the functions is ``MPI_PROC_NULL``, then the
neighborhood collective communication behaves like a point-to-point
communication with ``MPI_PROC_NULL`` in this direction. That is, the
buffer is still part of the sequence of neighbors, but it is neither
communicated nor updated.

NOTES
-----

The ``MPI_IN_PLACE`` option for *sendbuf* is not meaningful for this
operation.

ERRORS
------

.. include:: ./ERRORS.rst

.. seealso::
   * :ref:`MPI_Neighbor_allgatherv`
   * :ref:`MPI_Cart_create`
   * :ref:`MPI_Graph_create`
   * :ref:`MPI_Dist_graph_create`
   * :ref:`MPI_Gather`
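
EXAMPLE
-------

The following sketch (not taken from the Open MPI sources) illustrates
the Cartesian neighbor ordering described above: each process in a 1-D
periodic grid contributes one integer and receives one integer from each
of its ``2*ndims = 2`` neighbors, negative direction first.

.. code-block:: c

   #include <stdio.h>
   #include <mpi.h>

   int main(int argc, char *argv[])
   {
       MPI_Init(&argc, &argv);

       int world_size;
       MPI_Comm_size(MPI_COMM_WORLD, &world_size);

       /* 1-D periodic Cartesian topology: every rank has exactly two
          neighbors, ordered negative direction first, then positive. */
       int dims[1]    = { world_size };
       int periods[1] = { 1 };
       MPI_Comm cart;
       MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

       int rank;
       MPI_Comm_rank(cart, &rank);

       /* Each process contributes its rank; with 2 neighbors, the
          receive buffer needs room for 2 * recvcount elements. */
       int sendbuf = rank;
       int recvbuf[2];
       MPI_Neighbor_allgather(&sendbuf, 1, MPI_INT,
                              recvbuf, 1, MPI_INT, cart);

       printf("rank %d received %d (left) and %d (right)\n",
              rank, recvbuf[0], recvbuf[1]);

       MPI_Comm_free(&cart);
       MPI_Finalize();
       return 0;
   }

Because the grid is periodic, no neighbor is ``MPI_PROC_NULL`` here; with
``periods[0] = 0`` the border ranks would instead leave the corresponding
slot of ``recvbuf`` unmodified.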
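For the persistent variant, :ref:`MPI_Neighbor_allgather_init`, the same
exchange can be set up once and started repeatedly. The fragment below is
a sketch of how the blocking call in the example above might be replaced,
assuming an MPI-4 library.

.. code-block:: c

   /* Persistent variant: bind the arguments once, start many times. */
   MPI_Request req;
   MPI_Neighbor_allgather_init(&sendbuf, 1, MPI_INT,
                               recvbuf, 1, MPI_INT,
                               cart, MPI_INFO_NULL, &req);

   for (int step = 0; step < 10; ++step) {
       MPI_Start(&req);
       MPI_Wait(&req, MPI_STATUS_IGNORE);
       /* ... use recvbuf before the next MPI_Start ... */
   }

   MPI_Request_free(&req);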