The compiler divides a program into a series of code blocks called transactions, assuming during the division that these transactions have no memory dependencies. The blocks then execute optimistically on the processors, while the hardware enforces correct execution and uses caches to detect when a memory dependence violation between threads occurs.
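For example (a hypothetical loop, not taken from the paper), each iteration of a loop with indirect addressing could become one transaction; at compile time it is unknown whether two iterations touch the same element, so they are optimistically assumed independent:

```python
# Hypothetical sequential loop: the index values in idx are unknown at
# compile time, so the compiler cannot prove that iterations are independent
# and optimistically treats each iteration as a separate transaction.
def accumulate(a, idx, b):
    for i in range(len(idx)):
        a[idx[i]] += b[i]   # possible (unknown) memory dependence across iterations
```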
This is the first paper to propose using caches and cache coherence to maintain ordering among speculatively parallelized regions of sequential code in the presence of unknown memory dependencies. While the paper did not directly address explicitly parallel programming, it laid the groundwork for using caches and coherence protocols in later transactional memory proposals.
Implementation
First, the compiler divides the program into a sequence of "mostly independent" blocks (transactions). These blocks then run on a shared-memory multiprocessor, each with its own register state. All memory writes are buffered in a confirm cache, and each processor records all of its reads in a special dependency cache. Writes performed by other processors are observed on the bus (bus snooping) and used to update the dependency cache.
All transactions commit one by one, in the order prescribed by the original single-threaded program. The stored dependencies are used to detect write conflicts; if one occurs, the failed transaction simply runs again.
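The sketch below (illustrative Python, not the actual hardware; all names are made up) mimics this behaviour: each transaction buffers its writes (the confirm cache) and records its reads (the dependency cache), commits are serialized in program order, and a transaction whose read set overlaps the writes of an earlier committed transaction is simply re-executed.

```python
# Minimal software model of the scheme described above (assumptions, not the
# paper's hardware): speculative execution with deferred updates, conflict
# detection against earlier committed writes, commit in program order.

def execute(txn, memory):
    """Run one transaction; return its (read_set, write_buffer)."""
    read_set, write_buffer = set(), {}

    def read(addr):
        read_set.add(addr)                      # dependency cache
        return write_buffer.get(addr, memory.get(addr, 0))

    def write(addr, value):
        write_buffer[addr] = value              # confirm cache (deferred update)

    txn(read, write)
    return read_set, write_buffer

def run(transactions):
    memory = {}
    # Speculative phase: every transaction runs against the initial state.
    results = [execute(t, dict(memory)) for t in transactions]

    committed_writes = set()
    for txn, (read_set, write_buffer) in zip(transactions, results):
        if read_set & committed_writes:         # dependence violation detected
            read_set, write_buffer = execute(txn, memory)  # simply run again
        memory.update(write_buffer)             # commit in program order
        committed_writes.update(write_buffer)
    return memory
```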
Classification
Strong or weak isolation | Strong
Transaction granularity | Cache line
Direct or deferred update | Deferred (in cache)
Concurrency control | Optimistic; commits serialized globally
Conflict detection | Late write-write conflict; late write-read conflict
Inconsistent reads | No
Conflict resolution | Program order (sequential program)
Nested transactions | N/A
Sources
- J. R. Larus and R. Rajwar, Transactional Memory, Section 4.3.2
- T. F. Knight, "An architecture for mostly functional languages," in Proc. ACM Conference on Lisp and Functional Programming, pp. 105–112, Aug. 1986