Journal Achievements of Modern Radioelectronics, №1, 2012
Article in issue:
The implementation of hardware support for collective operations in a high-speed interconnect with multidimensional torus topology
Authors:
E. L. Syromyatnikov, D. V. Makagon, S. I. Paruta, A. A. Rumyantsev
Abstract:
Collective operations are used in a broad variety of inter-node communication tasks. Typical examples include broadcast (sending the same set of data from a single node to a set of nodes), reduce (gathering data from a set of nodes and applying a commutative, associative binary operation to it, with the result delivered to a given node), scatter (distributing an array of data from a single node to a set of nodes, with each node receiving its part of the array), gather (sending parts of the data from a set of nodes to a single node, which receives the complete array), allreduce (the same as reduce, but the result is delivered to all nodes that performed the operation), allgather (the same as gather, but the data is gathered by every node in the set), and alltoall (distributing an array of data from every node to all nodes in the set). Collective operations are among the basic communication primitives in most parallel programming standards (MPI, SHMEM, and PGAS languages such as UPC and X10). Collectives can constitute a significant part of the inter-node communication in many applications, at least in those that use linear algebra, graphs, or structured and unstructured grids. Although collectives can be implemented straightforwardly on top of point-to-point operations, a more sophisticated implementation that relies on hardware support yields a significant increase in the performance and scalability of parallel programs through data aggregation and the elimination of duplicate traffic.

JSC NICEVT is developing a high-speed interconnect with multidimensional torus topology. Hardware support for collective broadcast and reduce is implemented by adding two virtual subnetworks with tree topology. Each tree has a root, which defines two possible directions of movement, towards the root and away from it; each direction has its own virtual channel. The tree is constructed according to the XYZW order of the dimensions, which makes it possible to avoid deadlocks between different intersecting trees. Auxiliary transit nodes that do not logically belong to the tree can be used to make the tree connected when a connected tree complying with the XYZW-order rule cannot be built otherwise. The implemented collective operations are asynchronous and one-sided, i.e. control returns to the processor as soon as the operations are injected into the network, and the result is stored in the memory of the receiving side without processor involvement. This allows computation and communication to overlap.

The third-generation interconnect prototype (M3), consisting of 9 nodes connected in a 3x3 two-dimensional torus, is currently up and running. Debugging and fine-tuning of the collective operations is now in its final stage.
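For readers who want to see the collective semantics above in code, the sketch below expresses broadcast, reduce and allreduce through the standard MPI interface cited in the references. It is purely illustrative and is not the interconnect's native programming interface.

/* Illustrative only: the collectives described in the abstract, expressed
 * through standard MPI calls (MPI_Bcast, MPI_Reduce, MPI_Allreduce). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* broadcast: node 0 sends the same value to every node in the set */
    int seed = (rank == 0) ? 42 : 0;
    MPI_Bcast(&seed, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* reduce: a commutative, associative operation (here MPI_SUM) is applied
     * to the contributions of all nodes; the result lands on node 0 only */
    int local = rank + 1, sum = 0;
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    /* allreduce: the same as reduce, but every node receives the result */
    int max = 0;
    MPI_Allreduce(&local, &max, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);

    if (rank == 0)
        printf("seed=%d sum=%d max=%d\n", seed, sum, max);

    MPI_Finalize();
    return 0;
}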
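The XYZW dimension-order rule can be illustrated with a short sketch. The helper parent_of below is a hypothetical name introduced here for illustration, not taken from the article: it finds a node's parent in a tree rooted at a given torus coordinate by resolving dimensions strictly in X, Y, Z, W order, taking the shorter way around each ring, and it does not model the auxiliary transit nodes mentioned above. Because every tree resolves dimensions in the same fixed order, intersecting trees cannot form a cyclic channel dependency, which is the essence of the deadlock-avoidance argument.

/* Hypothetical sketch (names invented for illustration): parent computation
 * for a dimension-ordered (XYZW) tree on a multidimensional torus. */
#define DIMS 4                          /* X, Y, Z, W */

typedef struct { int c[DIMS]; } coord_t;

/* One hop from `from` towards `to` on a ring of the given extent,
 * taking the shorter direction around the ring. */
static int step_toward(int from, int to, int extent)
{
    int fwd = (to - from + extent) % extent;   /* hops in the + direction */
    if (fwd == 0)
        return from;
    return (fwd <= extent - fwd) ? (from + 1) % extent
                                 : (from - 1 + extent) % extent;
}

/* Parent of `node` in the tree rooted at `root`: the first dimension
 * (in X, Y, Z, W order) in which the coordinates differ is resolved by
 * one hop towards the root; the root is its own parent. */
coord_t parent_of(coord_t node, coord_t root, const int extent[DIMS])
{
    for (int d = 0; d < DIMS; ++d) {
        if (node.c[d] != root.c[d]) {
            node.c[d] = step_toward(node.c[d], root.c[d], extent[d]);
            return node;
        }
    }
    return node;
}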
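The overlap of computation and communication enabled by asynchronous, one-sided collectives can be mimicked at the software level with the MPI-3 nonblocking collectives. The sketch below is only an analogy of the programming pattern; it is not the hardware interface described in the article.

/* Illustrative analogy: overlapping local computation with an asynchronous
 * collective, expressed through the MPI-3 nonblocking allreduce. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)(rank + 1), global = 0.0;
    MPI_Request req;

    /* the collective is injected and control returns immediately,
     * leaving the processor free to do independent work */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    double acc = 0.0;
    for (int i = 0; i < 1000000; ++i)      /* independent local computation */
        acc += (double)i * 1e-9;

    MPI_Wait(&req, MPI_STATUS_IGNORE);     /* the result is now in `global` */

    if (rank == 0)
        printf("global sum = %f (local work = %f)\n", global, acc);

    MPI_Finalize();
    return 0;
}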
Pages: 11-15
References
  1. Fox, G. C., Johnson, M. A., Lyzenga, G. A., Otto, S. W., Salmon, J. K., Walker, D. W., Solving problems on concurrent processors. V. 1: General techniques and regular problems, Prentice-Hall, Inc. 1988.
  2. Message Passing Interface Forum, MPI: A Message-Passing Interface Standard, 1995, http://www.mpi-forum.org/docs/mpi-11-html/node64.html
  3. Feind, K. (Cray Research), Shared Memory Access (SHMEM) Routines. 1995. http://www.cug.org/5-publications/proceedings_attendee_lists/1997CD/S95PROC/303_308.PDF
  4. Wiebel, E., Greenberg, D., Seidel, S., UPC Collective Operations Specifications. 2003. http://upc.gwu.edu/docs/UPC_Coll_Spec_V1.0.pdf
  5. Saraswat, V., Bloom, B., Peshansky, I., Tardieu, O., Grove, D., X10 Language Specification. 2011. http://dist.codehaus.org/x10/documentation/languagespec/x10-latest.pdf
  6. Bala, V., Bruck, J., Cypher, R., Elustondo, P., Ho, A., Ho, Ching-Tien, Kipnis, Sh., Snir, M., CCL: A Portable and Tunable Collective Communication Library for Scalable Parallel Computers, Parallel Processing Symposium Proceedings, pp. 835-844. 1994. http://citeseer.ist.psu.edu/viewdoc/download;jsessionid=3D465188A4C42E5F58002758BE8B57C3-doi=10.1.1.155.3612&rep=rep1&type=pdf
  7. Almási, G., Dózsa, G., Erway, C., Steinmacher-Burow, B., Efficient Implementation of Allreduce on BlueGene/L Collective Network, Recent Advances in Parallel Virtual Machine and Message Passing Interface, pp. 57-66, Springer Berlin/Heidelberg. 2005. http://dx.doi.org/10.1007/11557265_12
  8. Korzh, A. A., Makagon, D. V., Borodin, A. A., Zhabin, I. A., Kushtanov, E. R., Syromyatnikov, E. L., Cheremushkina, E. V., A domestic 3D-torus communication network with globally addressable memory support for trans-petaflops-level supercomputers, Parallel Computational Technologies (PaVT-2010), Proceedings of the International Scientific Conference (Ufa, March 29 - April 2, 2010), pp. 227-237. Chelyabinsk: SUSU Publishing Center. 2010.
    http://omega.sp.susu.ac.ru/books/conference/PaVT2010/full/134.pdf
  9. Simonov, A. S., Zhabin, I. A., Makagon, D. V., Development of an inter-node communication network with a multidimensional torus topology and globally addressable memory support for advanced domestic supercomputers, Scientific and Technical Conference "Perspective Directions in the Development of Computing Technology", JSC NICEVT. 2011.