A lockup-free cache is a common requirement for most latency-hiding techniques, including prefetching, relaxed consistency models, non-blocking loads, and multithreading. The complexity of implementing a lockup-free cache depends on which of these techniques it is intended to support (as described in detail by Laudon [52]). For example, if the goal is simply to support multiple outstanding prefetches, then it is not strictly necessary for the processor to maintain state on outstanding transactions, as long as the cache is prepared to receive prefetch responses from outside the processor while the processor may be simultaneously issuing new requests. In contrast, supporting multiple outstanding stores (as with relaxed consistency models) or loads (if they are non-blocking) does require that special state be maintained for each outstanding access. For stores, the stored data must be merged into the cache line when it returns. For loads, the requested data must be forwarded directly to a register (thus requiring state to associate each outstanding access with the register(s) waiting for the value), and any future uses of that register must interlock if the value has not yet returned.
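To make these state requirements concrete, the sketch below models a small MSHR-style (miss status holding register) table in C. It is a hypothetical illustration under simplifying assumptions, not Laudon's design or any particular processor's implementation; all names, field layouts, and the table size are invented for the example. It shows the asymmetry described above: prefetch misses need no processor-side state, store misses buffer the data to be merged when the line returns, and load misses record a destination register so the value can be forwarded and later uses of that register can interlock until it arrives.

```c
/* Illustrative sketch of a lockup-free cache's outstanding-miss state.
 * Simplified: one word per line, no tags, no coalescing of misses. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_MSHRS 8
#define NUM_REGS  32

typedef enum { MISS_NONE, MISS_PREFETCH, MISS_LOAD, MISS_STORE } MissKind;

typedef struct {
    MissKind kind;
    uint64_t line_addr;   /* cache line being fetched              */
    int      dest_reg;    /* loads: register waiting for the value */
    uint32_t store_data;  /* stores: data to merge on return       */
} Mshr;

static Mshr mshrs[NUM_MSHRS];
static bool reg_pending[NUM_REGS];  /* set while a load's value is outstanding */

/* Allocate an MSHR for a new outstanding miss; returns -1 if all MSHRs
 * are busy, in which case the processor must stall this request. */
static int issue_miss(MissKind kind, uint64_t line_addr, int dest_reg, uint32_t data)
{
    for (int i = 0; i < NUM_MSHRS; i++) {
        if (mshrs[i].kind == MISS_NONE) {
            mshrs[i] = (Mshr){ kind, line_addr, dest_reg, data };
            if (kind == MISS_LOAD)
                reg_pending[dest_reg] = true;  /* future uses must interlock */
            return i;
        }
    }
    return -1;
}

/* Called when the memory system returns the requested line. */
static void miss_returned(int id, uint32_t line_value)
{
    Mshr *m = &mshrs[id];
    switch (m->kind) {
    case MISS_PREFETCH:
        /* Data simply fills the cache; no processor-side state to update. */
        break;
    case MISS_STORE:
        /* Merge the buffered store data into the returned line; a real cache
         * would merge only the bytes actually written. */
        printf("line 0x%llx filled, merged store data %u\n",
               (unsigned long long)m->line_addr, m->store_data);
        break;
    case MISS_LOAD:
        /* Forward the value to the waiting register and release the interlock. */
        printf("r%d <- %u\n", m->dest_reg, line_value);
        reg_pending[m->dest_reg] = false;
        break;
    default:
        break;
    }
    m->kind = MISS_NONE;  /* free the MSHR for reuse */
}

int main(void)
{
    int a = issue_miss(MISS_PREFETCH, 0x1000, -1, 0);
    int b = issue_miss(MISS_LOAD,     0x2000,  5, 0);
    printf("r5 pending before return: %d\n", reg_pending[5]);
    miss_returned(a, 0);
    miss_returned(b, 42);
    printf("r5 pending after return:  %d\n", reg_pending[5]);
    return 0;
}
```

In this toy model, a prefetch consumes an MSHR entry only so the cache can accept its response later, whereas a load ties up both an MSHR and the interlock bit on its destination register until the data returns, which is exactly the extra bookkeeping the paragraph above attributes to non-blocking loads.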
