Peer-Reviewed

Analysis of World Experience in Creating Parallel Computing Systems Designed to Effectively Solve DIS-tasks

Received: 22 August 2019     Accepted: 23 September 2019     Published: 9 October 2019
Abstract

The author reviews world experience in building parallel computing systems designed to solve data-intensive (DIS) tasks efficiently, taking the Cray XE6 and its Gemini network chip as the example. Modern supercomputers (SC) most often use shared-memory architecture variants to solve problems of high capacitive (memory) complexity efficiently, including problems dominated by irregular memory access. Support for a shared-memory programming model can be provided in several ways: in hardware or by means of virtualization software. Different implementations of the shared-memory programming model may differ in functionality and in memory-access timing. The essence of the "memory wall" problem is that, whereas arithmetic-logic operations take a few processor cycles, operations on memory take several hundred cycles. If the memory is composed of the memories of compute nodes connected by a communication network, the execution time of such an access also includes the time of the network operations that transfer addresses and data, which raises the memory-access time to several thousand cycles. Such data-access delays leave the processor's functional units idle: they cannot perform arithmetic and logic operations on data, because the data simply are not available yet owing to the long latencies of memory operations.
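The latency figures above can be made concrete with a small experiment. The sketch below is not part of the article; the array size, the shuffle routine, and timing with clock() are illustrative assumptions. It contrasts regular streaming access with irregular pointer chasing over the same array: the streaming loop keeps the functional units fed from the caches, while every step of the chase waits for roughly the full memory latency, which is exactly the access pattern that characterizes DIS-tasks.

    /*
     * Minimal sketch: regular (streaming) vs. irregular (pointer-chasing)
     * memory access, illustrating the "memory wall" on a single node.
     * Array size and timing method are illustrative assumptions.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)1 << 24)   /* 16M elements, far larger than the caches */

    int main(void)
    {
        size_t *next = malloc(N * sizeof *next);
        long long sum = 0;
        if (!next) return 1;

        for (size_t i = 0; i < N; i++) next[i] = i;

        /* Sattolo's algorithm: build a single-cycle random permutation, so the
           chase below visits every element before repeating (good enough for a
           demonstration). */
        srand(1);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;              /* j in [0, i-1] */
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        clock_t t0 = clock();
        for (size_t i = 0; i < N; i++) sum += next[i];  /* regular, streaming access */
        clock_t t1 = clock();

        size_t p = 0;
        for (size_t i = 0; i < N; i++) p = next[p];     /* irregular, latency-bound chase */
        clock_t t2 = clock();

        printf("sequential sum: %.3f s (sum=%lld)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, sum);
        printf("pointer chase : %.3f s (p=%zu)\n",     (double)(t2 - t1) / CLOCKS_PER_SEC, p);
        free(next);
        return 0;
    }

Scaled up to a shared memory assembled over a communication network, the same effect grows from hundreds to thousands of cycles per access: with single-cycle arithmetic and a load latency of about 1000 cycles, a core issuing one dependent load per operation keeps its functional units busy only about 0.1% of the time unless many independent memory accesses are kept in flight.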

Published in Journal of Electrical and Electronic Engineering (Volume 7, Issue 5)

This article belongs to the Special Issue Science Innovation

DOI 10.11648/j.jeee.20190705.11
Page(s) 101-106
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2019. Published by Science Publishing Group

Keywords

DIS-tasks, Irregular Work with Memory, Information Security, Supercomputer, Shared Memory

Cite This Article
  • APA Style

Molyakov, A. (2019). Analysis of World Experience in Creating Parallel Computing Systems Designed to Effectively Solve DIS-tasks. Journal of Electrical and Electronic Engineering, 7(5), 101-106. https://doi.org/10.11648/j.jeee.20190705.11


Author Information
  • Andrey Molyakov, Institute of Information Technologies and Cybersecurity, Russian State University for the Humanities, Moscow, Russia
