Reposted from: http://en.wikipedia/wiki/Non-Uniform_Memory_Access


Non-uniform memory access

From Wikipedia, the free encyclopedia

Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users.[1]

NUMA architectures logically follow in scaling from symmetric multiprocessing (SMP) architectures. They were developed commercially during the 1990s by Burroughs (later Unisys), Convex Computer (later Hewlett-Packard), Honeywell Information Systems Italy (HISI) (later Groupe Bull), Silicon Graphics (later Silicon Graphics International), Sequent Computer Systems (later IBM), Data General (later EMC), and Digital (later Compaq, now HP). Techniques developed by these companies later featured in a variety of Unix-like operating systems, and to an extent in Windows NT.

The first commercial implementation of a NUMA-based Unix system was the Symmetrical Multi Processing XPS-100 family of servers, designed by Dan Gielan of VAST Corporation for Honeywell Information Systems Italy.

Contents

  • 1 Basic concept
  • 2 Cache coherent NUMA (ccNUMA)
  • 3 NUMA vs. cluster computing
  • 4 Software support
  • 5 See also
  • 6 References
  • 7 External links

Basic concept

[Figure: one possible architecture of a NUMA system. The processors connect to the bus or crossbar by connections of varying thickness/number, showing that different CPUs have different access priorities to memory based on their relative location.]

Modern CPUs operate considerably faster than the main memory they use. In the early days of computing and data processing, the CPU generally ran slower than its own memory. The performance lines of processors and memory crossed in the 1960s with the advent of the first supercomputers. Since then, CPUs increasingly have found themselves "starved for data" and having to stall while waiting for data to arrive from memory. Many supercomputer designs of the 1980s and 1990s focused on providing high-speed memory access as opposed to faster processors, allowing the computers to work on large data sets at speeds other systems could not approach.

Limiting the number of memory accesses provided the key to extracting high performance from a modern computer. For commodity processors, this meant installing an ever-increasing amount of high-speed cache memory and using increasingly sophisticated algorithms to avoid cache misses. But the dramatic increase in size of the operating systems and of the applications run on them has generally overwhelmed these cache-processing improvements. Multi-processor systems without NUMA make the problem considerably worse. Now a system can starve several processors at the same time, notably because only one processor can access the computer's memory at a time.[2]

NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to address the same memory. For problems involving spread data (common for servers and similar applications), NUMA can improve the performance over a single shared memory by a factor of roughly the number of processors (or separate memory banks).[3] Another approach to addressing this problem, utilized mainly by non-NUMA systems, is the multi-channel memory architecture, in which multiple memory channels increase the number of simultaneous memory accesses.[4]

Of course, not all data ends up confined to a single task, which means that more than one processor may require the same data. To handle these cases, NUMA systems include additional hardware or software to move data between memory banks. This operation slows the processors attached to those banks, so the overall speed increase due to NUMA depends heavily on the nature of the running tasks.[3]
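To make the local-memory idea concrete, the following is a minimal sketch, not taken from the article, assuming a Linux system with libnuma installed and the program linked with -lnuma: it looks up the NUMA node of the CPU the calling thread is running on and allocates a working buffer from that node's memory bank.

```c
/* Minimal sketch: allocate memory on the calling thread's local NUMA node.
 * Assumes Linux with libnuma (link with -lnuma); illustrative only. */
#define _GNU_SOURCE
#include <numa.h>       /* numa_available, numa_node_of_cpu, numa_alloc_onnode */
#include <sched.h>      /* sched_getcpu */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    int cpu  = sched_getcpu();            /* CPU this thread is running on */
    int node = numa_node_of_cpu(cpu);     /* NUMA node that owns this CPU  */

    size_t size = 64UL * 1024 * 1024;     /* 64 MiB working set */
    void *buf = numa_alloc_onnode(size, node);  /* backed by node-local memory */
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return EXIT_FAILURE;
    }

    printf("CPU %d on node %d: allocated %zu bytes of node-local memory\n",
           cpu, node, size);

    /* ... work on buf here; accesses stay on the local memory bank ... */

    numa_free(buf, size);
    return EXIT_SUCCESS;
}
```

The node lookup and the numa_alloc_onnode() call are the only NUMA-specific steps; the rest is ordinary C, and on a machine without NUMA support numa_available() simply reports that the API cannot be used.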

Intel announced NUMA compatibility for its x86 and Itanium servers in late 2007 with its Nehalem and Tukwila CPUs.[5] Both CPU families share a common chipset; the interconnection is called Intel Quick Path Interconnect (QPI).[6] AMD implemented NUMA with its Opteron processor (2003), using HyperTransport. Freescale's NUMA for PowerPC is called CoreNet.

Cache coherent NUMA (ccNUMA)

[Figure: topology of a ccNUMA Bulldozer server.]

Nearly all CPU architectures use a small amount of very fast non-shared memory known as cache to exploit locality of reference in memory accesses. With NUMA, maintaining cache coherence across shared memory has a significant overhead. Although simpler to design and build, non-cache-coherent NUMA systems become prohibitively complex to program in the standard von Neumann architecture programming model.[7]

Typically, ccNUMA uses inter-processor communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location. For this reason, ccNUMA may perform poorly when multiple processors attempt to access the same memory area in rapid succession. Support for NUMA in operating systems attempts to reduce the frequency of this kind of access by allocating processors and memory in NUMA-friendly ways and by avoiding scheduling and locking algorithms that make NUMA-unfriendly accesses necessary.[8]
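As an illustration of what "allocating processors and memory in NUMA-friendly ways" can look like from the application side, here is a hedged sketch, again assuming Linux with libnuma and using a hypothetical helper run_on_node(): each worker restricts itself to the CPUs of one node and allocates its buffer from that same node, so its accesses stay local and generate little cross-node coherence traffic.

```c
/* Minimal sketch: keep a worker's CPU and memory on the same NUMA node.
 * Assumes Linux with libnuma (link with -lnuma); illustrative only. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: run some work with CPU and memory bound to 'node'. */
static int run_on_node(int node, size_t bytes)
{
    if (numa_run_on_node(node) != 0) {  /* restrict this thread to CPUs of 'node' */
        perror("numa_run_on_node");
        return -1;
    }
    numa_set_preferred(node);           /* prefer allocations from 'node' */

    void *buf = numa_alloc_onnode(bytes, node);
    if (buf == NULL)
        return -1;

    memset(buf, 0, bytes);              /* touch the pages; they stay node-local */
    /* ... NUMA-friendly work on buf ... */
    numa_free(buf, bytes);
    return 0;
}

int main(void)
{
    if (numa_available() < 0)
        return 1;

    int last = numa_max_node();         /* highest node number in the system */
    for (int node = 0; node <= last; node++)
        run_on_node(node, 16UL * 1024 * 1024);
    return 0;
}
```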

Alternatively, cache coherency protocols such as the MESIF protocol attempt to reduce the communication required to maintain cache coherency. Scalable Coherent Interface (SCI) is an IEEE standard defining a directory-based cache coherency protocol to avoid scalability limitations found in earlier multiprocessor systems. For example, SCI is used as the basis for the NumaConnect technology.[9][10]

As of 2011, ccNUMA systems are multiprocessor systems based on the AMD Opteron processor, which can be implemented without external logic, and the Intel Itanium processor, which requires the chipset to support NUMA. Examples of ccNUMA-enabled chipsets are the SGI Shub (Super hub), the Intel E8870, the HP sx2000 (used in the Integrity and Superdome servers), and those found in NEC Itanium-based systems. Earlier ccNUMA systems such as those from Silicon Graphics were based on MIPS processors and the DEC Alpha 21364 (EV7) processor.

NUMA vs. cluster computing

One can view NUMA as a tightly coupled form of cluster computing. The addition of virtual memory paging to a cluster architecture can allow the implementation of NUMA entirely in software. However, the inter-node latency of software-based NUMA remains several orders of magnitude greater (slower) than that of hardware-based NUMA.[1]

Software support

Since NUMA largely influences memory access performance, certain software optimizations are needed to allow scheduling threads and processes close to their data.

  • Microsoft Windows 7 and Windows Server 2008 R2 added support for NUMA architecture over 64 logical cores.[11]
  • Java 7 added support for a NUMA-aware memory allocator and garbage collector.[12]
  • The Linux kernel 2.5 already had basic NUMA support built in,[13] which was further extended in subsequent releases. Linux kernel version 3.8 brought a new NUMA foundation that allowed more efficient NUMA policies to be built in later kernel releases.[14][15] Linux kernel version 3.13 brought numerous policies that attempt to place a process near its memory, together with handling of cases such as pages shared between processes or transparent huge pages; new sysctl settings allow NUMA balancing to be enabled or disabled, and various NUMA memory balancing parameters to be configured[16][17][18] (see the sketch after this list).
  • OpenSolaris models NUMA architecture with lgroups.
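When experimenting with the Linux automatic NUMA balancing mentioned above, one way to observe where the kernel has placed (or migrated) an application's pages is the query mode of move_pages(2); the sketch below is an assumption-laden illustration for Linux with libnuma (linked with -lnuma), not taken from any of the listed systems' documentation.

```c
/* Minimal sketch: ask the kernel which NUMA node each page of a buffer is on.
 * Useful when observing automatic NUMA balancing. Assumes Linux with libnuma
 * (link with -lnuma); illustrative only. */
#include <numa.h>
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    if (numa_available() < 0)
        return EXIT_FAILURE;

    long page = sysconf(_SC_PAGESIZE);
    size_t npages = 8;
    char *buf = numa_alloc_local(npages * page);   /* prefer the local node */
    if (buf == NULL)
        return EXIT_FAILURE;
    memset(buf, 0, npages * page);                 /* fault the pages in */

    void *pages[8];
    int   status[8];
    for (size_t i = 0; i < npages; i++)
        pages[i] = buf + i * page;

    /* With nodes == NULL, numa_move_pages() moves nothing; it reports the
     * node each page currently resides on in 'status'. */
    if (numa_move_pages(getpid(), npages, pages, NULL, status, 0) != 0) {
        perror("numa_move_pages");
        return EXIT_FAILURE;
    }

    for (size_t i = 0; i < npages; i++)
        printf("page %zu is on node %d\n", i, status[i]);

    numa_free(buf, npages * page);
    return EXIT_SUCCESS;
}
```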

See also

  • Uniform memory access (UMA)
  • Cluster computing
  • Symmetric multiprocessing (SMP)
  • Cache only memory architecture (COMA)
  • Scratchpad memory (SPM)
  • Supercomputer
  • Silicon Graphics (SGI)
  • HiperDispatch
  • Intel QuickPath Interconnect (QPI)
  • HyperTransport

References

  1. ^ Nakul Manchanda; Karan Anand (2010-05-04). "Non-Uniform Memory Access (NUMA)". New York University. Retrieved 2014-01-27.
  2. ^ Sergey Blagodurov; Sergey Zhuravlev; Mohammad Dashti; Alexandra Fedorova (2011-05-02). "A Case for NUMA-aware Contention Management on Multicore Systems" (PDF). Simon Fraser University. Retrieved 2014-01-27.
  3. ^ Zoltan Majo; Thomas R. Gross (2011). "Memory System Performance in a NUMA Multicore Multiprocessor" (PDF). ACM. Retrieved 2014-01-27.
  4. ^ "Intel Dual-Channel DDR Memory Architecture White Paper" (PDF, 1021 KB) (Rev. 1.0 ed.). Infineon Technologies North America and Kingston Technology. September 2003. Archived from the original on 2011-09-29. Retrieved 2007-09-06.
  5. ^ Intel Corp. (2008). Intel QuickPath Architecture [White paper]. Retrieved from http://www.intel/pressroom/archive/reference/whitepaper_QuickPath.pdf
  6. ^ Intel Corporation (2007-09-18). Gelsinger Speaks to Intel and High-Tech Industry's Rapid Technology Cadence [Press release]. Retrieved from http://www.intel/pressroom/archive/releases/2007/20070918corp_b.htm
  7. ^ "ccNUMA: Cache Coherent Non-Uniform Memory Access". slideshare. 2014. Retrieved 2014-01-27.
  8. ^ Per Stenstrom; Truman Joe; Anoop Gupta (2002). "Comparative Performance Evaluation of Cache-Coherent NUMA and COMA Architectures" (PDF). ACM. Retrieved 2014-01-27.
  9. ^ David B. Gustavson (September 1991). "The Scalable Coherent Interface and Related Standards Projects". SLAC Publication 5656. Stanford Linear Accelerator Center. Retrieved 2014-01-27.
  10. ^ "The NumaChip enables cache coherent low cost shared memory". Numascale. Retrieved 2014-01-27.
  11. ^ NUMA Support (MSDN)
  12. ^ Java HotSpot™ Virtual Machine Performance Enhancements
  13. ^ "Linux Scalability Effort: NUMA Group Homepage". sourceforge. 2002-11-20. Retrieved 2014-02-06.
  14. ^ "1.8. Automatic NUMA balancing". Linux 3.8. kernelnewbies. 2013-02-08. Retrieved 2014-02-06.
  15. ^ Jonathan Corbet (2012-11-14). "NUMA in a hurry". LWN. Retrieved 2014-02-06.
  16. ^ "1.6. Improved performance in NUMA systems". Linux 3.13. kernelnewbies. 2014-01-19. Retrieved 2014-02-06.
  17. ^ "Documentation/sysctl/kernel.txt". Linux kernel documentation. kernel. Retrieved 2014-02-06.
  18. ^ Jonathan Corbet (2013-10-01). "NUMA scheduling progress". LWN. Retrieved 2014-02-06.

This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.

External links

  • NUMA FAQ
  • Page-based distributed shared memory
  • OpenSolaris NUMA Project
  • Introduction video for the Alpha EV7 system architecture
  • More videos related to EV7 systems: CPU, IO, etc
  • NUMA optimization in Windows Applications
  • NUMA Support in Linux at SGI
  • Intel Tukwila
  • Intel QPI (CSI) explained
  • current Itanium NUMA systems

Note: I have recently been taking a MySQL operations and maintenance course, and the instructor recommended this article as extended reading for study and technical improvement. I am recording it in my blog first and will gradually digest, study, and build on it. This article relates to the topic of MySQL database performance optimization.

