Proxy Pattern in UVM: Controlling Access with Register Models and Sequencers

This is the fourth and final post in the Structural Patterns mini-series. The previous three posts covered the Adapter Pattern (translating register operations to APB bus transactions), the Decorator Pattern (wrapping analysis subscribers to layer in coverage and checking), and the Facade Pattern (hiding multi-agent complexity behind virtual sequences). This post reuses the memory subsystem introduced in the Facade post — the same CPU agent, DMA agent, and memory controller — so if you read that post, the environment will look familiar.

You are testing a memory controller. You need to configure timing registers and kick off a DMA transfer. One approach: sprinkle raw poke() calls with magic addresses directly in the test. Another: grab the DMA and CPU sequencers simultaneously with no coordination. Both approaches work — until they don't. Random failures on regression night, protocol errors that only show up under load, tests that pass in isolation and fail in parallel — these are the symptoms. The disease is unmediated access to shared resources.

The Problem: When Tests Touch Registers and Sequences Collide

Verification engineers learn UVM by writing sequences. Eventually someone needs to configure a register, and the fastest path is a direct address write. Eventually two sequences need the same sequencer, and the fastest path is starting both and hoping the arbitration gods are kind. These shortcuts feel harmless at first. They are not.

Problem A — Direct Register Access

Suppose you need to configure the memory controller's burst length and enable the DMA engine before a transfer. The naive approach reaches directly into the address map:

// BAD: raw address writes scattered across the test
class dma_basic_test extends uvm_test;
  task run_phase(uvm_phase phase);
    // Configure burst length register at 0x4000_0010
    mem_agent.driver.poke(32'h4000_0010, 32'h0000_0008);
    // Enable DMA at 0x4000_0020
    mem_agent.driver.poke(32'h4000_0020, 32'h0000_0001);
    // Kick off transfer — another magic address
    mem_agent.driver.poke(32'h4000_0030, 32'hDEAD_C0DE);
  endtask
endclass

This works once. Then the hardware team moves the DMA enable register from offset 0x20 to 0x24 to make room for a new status field. Now every test that hard-coded 0x4000_0020 is broken — and there is no compiler warning to find them. Worse, because the write bypasses any register model mirror, a later read() on the same register returns stale data. The mirror and the hardware are out of sync, and the test has no idea.

Using force or $deposit to poke RTL signals directly is even more fragile: it bypasses the bus protocol entirely, leaves no transaction-level trace in the waveform, and breaks as soon as the RTL hierarchy is refactored.

The problems compound:

  • Magic numbers are repeated across dozens of tests with no single source of truth.
  • Protocol-level consistency (write-then-read-back, field masking, access-type enforcement) is never checked.
  • The register mirror — the model's view of what the hardware currently holds — is never updated, so predict-and-compare coverage is impossible.
  • Any address-map change requires a grep-and-replace across the entire testbench.

Problem B — Sequence Arbitration Chaos

The second problem is more subtle. The memory controller has one sequencer. The CPU sequence and the DMA sequence both need to drive transactions through it. A common first attempt starts them in parallel from a virtual sequence:

// BAD: two sequences racing on the same sequencer
class mem_parallel_vseq extends uvm_sequence;
  cpu_mem_seq   cpu_seq;
  dma_burst_seq dma_seq;

  task body();
    cpu_seq = cpu_mem_seq::type_id::create("cpu_seq");
    dma_seq = dma_burst_seq::type_id::create("dma_seq");

    fork
      cpu_seq.start(mem_ctrl_seqr);  // both start on the
      dma_seq.start(mem_ctrl_seqr);  // same sequencer
    join
  endtask
endclass

UVM's default round-robin arbitration will interleave items from both sequences. For single-beat transactions that is sometimes acceptable. For a DMA burst — which must own the bus for the entire transfer without interruption — it is fatal. A CPU read can slip in between burst beats, corrupting the transfer. The failure is non-deterministic: it depends on simulation scheduling, sequence lengths, and which sequence gets the first arbitration slot. In a nightly regression it shows up as a random hang or a bus-protocol checker firing with no obvious cause.

Adding a priority argument to start() helps slightly but does not solve the problem: priority only affects which sequence wins the next arbitration slot (and only under an arbitration mode that honors priority), not whether a burst stays atomic. Priority alone cannot express “this sequence must run to completion before any other sequence gets a turn” — that guarantee requires exclusive access.

The Root Cause

Both problems share a common root: the test is talking directly to the resource. There is no intermediary to enforce rules, track state, or coordinate access. The fix in both cases is to introduce a Proxy — an object that sits between the caller and the resource, controls how access happens, and adds the policy the raw resource cannot provide on its own.

Both problems have the same solution — put a Proxy between the caller and the resource. For registers: uvm_reg. For sequencer access: lock() and grab(). Let's see how.

Gang of Four: The Proxy Pattern

Provide a surrogate or placeholder for another object to control access to it.

classDiagram
    class Subject {
        <<interface>>
        +request()
    }
    class RealSubject {
        +request()
    }
    class Proxy {
        -realSubject: RealSubject
        +request()
    }
    Subject <|.. RealSubject
    Subject <|.. Proxy
    Proxy --> RealSubject : delegates to

The key insight is that the Proxy presents the same interface as the real object. Callers never know they are talking to a proxy — they call request() on what they think is the real thing, and the proxy decides what to do: delegate immediately, check access rights, add logging, cache results, or defer creation. The caller's code does not change regardless of whether a proxy is in the call chain or not.

The GoF catalog names four common variants:

  • Virtual Proxy — defers creation of an expensive object until first use
  • Protection Proxy — checks access rights before delegating
  • Remote Proxy — local representative of a resource that lives elsewhere
  • Smart Proxy — adds behavior (logging, caching, ref counting) around delegation
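
Before mapping these onto UVM, here is the bare pattern in plain SystemVerilog. This is a hypothetical sketch, not UVM code: reg_if, hw_reg, and protected_reg are invented names, and the protection rule mimics a read-only field.

```systemverilog
// Subject: the interface both the real object and the proxy present
interface class reg_if;
  pure virtual function void write(int unsigned v);
  pure virtual function int unsigned read();
endclass

// RealSubject: the actual resource
class hw_reg implements reg_if;
  protected int unsigned value;
  virtual function void write(int unsigned v); value = v; endfunction
  virtual function int unsigned read(); return value; endfunction
endclass

// Protection Proxy: same interface, gated delegation
class protected_reg implements reg_if;
  protected hw_reg real_reg;
  protected bit    writable;
  function new(hw_reg r, bit writable);
    this.real_reg = r;
    this.writable = writable;
  endfunction
  virtual function void write(int unsigned v);
    if (writable) real_reg.write(v);  // delegate only when policy allows
    // else: drop the write, mimicking an RO field
  endfunction
  virtual function int unsigned read(); return real_reg.read(); endfunction
endclass
```

A caller holding a reg_if handle cannot tell whether it has the real register or the proxy — which is exactly the point.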

Proxy vs Decorator: Both wrap an object. Decorator adds to the interface — it can add new methods or extend existing ones. Proxy uses the same interface as the real object and focuses on controlling when and how the real object is accessed, not adding new capabilities. If you find yourself adding a method that the wrapped object does not have, you are decorating. If you are gating or mediating access to the same methods, you are proxying.

Proxy vs Facade: Facade simplifies a complex subsystem behind a new, simpler interface. Proxy controls access to a single object behind the same interface. Facade changes the shape of what callers see; Proxy keeps the shape identical and changes only the access semantics.

Proxy vs Adapter: Adapter translates between two incompatible interfaces — it exists precisely because caller and callee do not agree on the interface. Proxy uses the same interface as the real object — no translation required, no impedance mismatch to bridge.

Signals that a Proxy is the right tool:

  • Expensive object initialization that should be deferred until the resource is actually needed
  • Access arbitration where multiple callers must be serialized or prioritized
  • Register abstraction that replaces raw address writes with a named, mirrored, policy-enforced interface
  • Lazy DUT configuration where register fields are accumulated and written in a single burst rather than one at a time

UVM's Proxy: uvm_reg and the Sequencer

UVM ships with two first-class Proxy implementations, and they serve two different GoF variants. uvm_reg is a Smart Proxy and a Remote Proxy rolled into one: it mirrors register state locally and delegates every actual bus operation through a chain that ends at DUT silicon. The sequencer is a Protection Proxy: it sits between every sequence and the driver, enforcing arbitration policy so that the driver never has to care about who else wants the bus. Both objects are already in every UVM testbench. Most engineers use them every day without recognizing the pattern — but once you see it, you cannot unsee it.

Proxy 1: uvm_reg as Smart/Remote Proxy

GoF Role    | UVM Concept
Subject     | A register with read/write/update
RealSubject | The actual hardware register (DUT flip-flops)
Proxy       | uvm_reg object in the register model

When you call reg.write() or reg.read(), you are not touching hardware directly. The call enters uvm_reg, which resolves the register's offset through its uvm_reg_map, selects the appropriate uvm_reg_adapter to translate the generic register operation into a bus-specific sequence item, and hands that item to a bus sequence, which the driver executes against the DUT. In full: uvm_reg → uvm_reg_map → uvm_reg_adapter → bus sequence → driver → DUT. Five layers of indirection, invisible to the test author.

This is the Remote Proxy half of the pattern. The “real” register is DUT silicon — physically remote from the testbench process. uvm_reg is its local software representative. The caller writes SystemVerilog objects; the proxy ensures the intent eventually reaches flip-flops on the other side of the simulation boundary.

The Smart Proxy half is what uvm_reg adds on top of plain delegation:

  • Access-type enforcement — a field declared RO will not be written even if the test calls write() on it; a field declared W1C will have the correct mask applied automatically.
  • Desired value caching — the value you want in the register, accumulated through set() calls and committed with update().
  • Mirror value caching — the proxy's best belief of what the hardware currently holds, updated after every successful read() or write().

The mirror and desired values deserve a closer look because they are the caching-proxy behavior in practice. reg.get_mirrored_value() returns what the proxy believes is in hardware right now; it is refreshed each time a read() or write() completes successfully. reg.get() returns the desired value: what you have programmed via set() but may not yet have committed. These two values can diverge. Use get_mirrored_value() in scoreboards and checkers when you want to compare against the hardware state without issuing a bus transaction. Use get() before an update() call to confirm the value you are about to write. The proxy can answer “what is in the register” without touching the bus at all — that is caching in action, and it matters for simulation performance in register-heavy environments.

Proxy 2: Sequencer as Protection Proxy

GoF Role    | UVM Concept
Subject     | Something that can drive the bus
RealSubject | The driver
Proxy       | The sequencer

Sequences never talk to the driver directly. Every sequence item travels through the sequencer before the driver ever sees it. start_item() and finish_item() are the sequencer's controlled handshake: start_item() requests an arbitration slot and blocks until the sequencer grants it; finish_item() hands the item to the driver and blocks until the driver signals completion. The driver simply calls get_next_item() in a loop — it is completely unaware of how many sequences are competing, or in what order they were admitted.
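
To make the handshake concrete, here is a minimal single-item sequence sketch. The class name and the cpu_txn item type are placeholders, not classes from this series.

```systemverilog
class cpu_single_txn_seq extends uvm_sequence #(cpu_txn);  // cpu_txn assumed
  `uvm_object_utils(cpu_single_txn_seq)

  function new(string name = "cpu_single_txn_seq");
    super.new(name);
  endfunction

  task body();
    cpu_txn txn = cpu_txn::type_id::create("txn");
    start_item(txn);             // blocks until the sequencer grants an arbitration slot
    if (!txn.randomize())
      `uvm_error(get_type_name(), "randomize() failed")
    finish_item(txn);            // blocks until the driver calls item_done()
  endtask
endclass
```

Late randomization — randomizing between start_item() and finish_item() — is the idiomatic form: the item is constrained at the moment the slot is granted, not before.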

The sequencer exposes two mechanisms for exclusive access that make burst atomicity possible:

  • lock() — cooperative exclusive access. The calling sequence waits until the item currently in flight completes, then the sequencer queues all other sequences and grants exclusive access to the locking sequence. Other sequences accumulate in the arbitration queue and resume when unlock() is called. This is the polite version: it does not interrupt work already in flight.
  • grab() — preemptive exclusive access. The calling sequence immediately suspends the currently running sequence, even mid-execution, and takes the arbitration slot. The suspended sequence is pushed back and resumes after ungrab(). This is the emergency version: use it when the accessing sequence cannot wait even one more item.

The sequencer enforces the access policy so the driver never has to. The driver's run_phase loop is a simple get_next_item() / item_done() cycle regardless of whether one sequence or ten are running. All arbitration, prioritization, locking, and queuing happen inside the sequencer proxy — the Protection Proxy doing exactly what the GoF pattern prescribes: controlling who gets access, and when.
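
The driver side of that contract can be sketched as follows (mem_txn is an assumed transaction type; the pin-level driving is elided):

```systemverilog
// The loop is identical whether one sequence or ten are competing upstream.
class mem_ctrl_driver extends uvm_driver #(mem_txn);
  `uvm_component_utils(mem_ctrl_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);  // blocks until the sequencer proxy grants an item
      // ... drive req onto the bus through a virtual interface (elided) ...
      seq_item_port.item_done();         // unblocks the winning sequence's finish_item()
    end
  endtask
endclass
```

No arbitration logic appears anywhere in the driver — that is the Protection Proxy earning its keep.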

Every RAL-based testbench already uses the Proxy pattern for registers. Every testbench with concurrent sequences already uses the Proxy pattern for sequencer arbitration. The pattern is not something you add to UVM — it is already there, baked into the methodology. What this post gives you is the mental model to reason about what is happening when a reg.write() call takes 12 simulation cycles, or when a grab() on the sequencer silently pauses another test thread. Once you recognize the proxy, you can extend it intentionally: add pre/post-write hooks in a uvm_reg subclass, build a custom arbitration scheme on top of the sequencer, or layer a Virtual Proxy in front of a register block that should not be touched until the DUT comes out of reset.

Building the Memory Subsystem Proxy

You'll recognize this setup from the Facade post — the same three-agent memory subsystem. The CPU agent drives configuration writes over APB, the DMA agent initiates burst transfers, and the memory controller agent owns the single sequencer that arbitrates access to the memory bus. In the Facade post, a virtual sequence hid that complexity. Here, we go one level deeper: we look at how the register proxy (uvm_reg) controls access to the memory controller's timing registers, and how the sequencer's lock and grab APIs protect multi-beat DMA bursts from being interrupted.

Part A: Register Proxy (uvm_reg)

The memory controller exposes a timing register that packs three fields — t_ras, t_rcd, and t_rp — into a single 32-bit word. We model it as a uvm_reg subclass and group it in a uvm_reg_block. The block is the proxy object: tests never touch the DUT register directly; they call methods on this object, and the object handles addressing, field masking, and mirror tracking.

class mem_timing_reg extends uvm_reg;
  uvm_reg_field t_ras;  // Row Active Time
  uvm_reg_field t_rcd;  // RAS to CAS Delay
  uvm_reg_field t_rp;   // Row Precharge Time

  `uvm_object_utils(mem_timing_reg)

  function new(string name = "mem_timing_reg");
    super.new(name, 32, UVM_NO_COVERAGE);
  endfunction

  virtual function void build();
    t_ras = uvm_reg_field::type_id::create("t_ras");
    t_rcd = uvm_reg_field::type_id::create("t_rcd");
    t_rp  = uvm_reg_field::type_id::create("t_rp");
    t_ras.configure(this, 8, 16, "RW", 0, 8'h06, 1, 1, 0);
    t_rcd.configure(this, 8,  8, "RW", 0, 8'h03, 1, 1, 0);
    t_rp.configure(this,  8,  0, "RW", 0, 8'h03, 1, 1, 0);
  endfunction
endclass

class mem_ctrl_reg_block extends uvm_reg_block;
  mem_timing_reg timing;
  uvm_reg_map    default_map;

  `uvm_object_utils(mem_ctrl_reg_block)

  function new(string name = "mem_ctrl_reg_block");
    super.new(name, UVM_NO_COVERAGE);
  endfunction

  virtual function void build();
    timing = mem_timing_reg::type_id::create("timing");
    timing.build();
    timing.configure(this);

    default_map = create_map("default_map", 'h0, 4, UVM_LITTLE_ENDIAN);
    default_map.add_reg(timing, 'h10, "RW");
    lock_model();
  endfunction
endclass

With the block built, wire it to the agent's sequencer in connect_phase so the proxy knows how to route bus transactions:

function void connect_phase(uvm_phase phase);
  super.connect_phase(phase);
  reg_block.default_map.set_sequencer(
    mem_ctrl_agent.sequencer,
    mem_apb_adapter);
endfunction

Now the test talks to reg_block.timing using named fields instead of magic addresses. The proxy resolves offsets, applies field masks, drives the APB bus, and updates the mirror automatically:

// Through the proxy — clean, traceable, self-documenting
uvm_status_e   status;
uvm_reg_data_t rdata;

// Set desired values
reg_block.timing.t_ras.set(8'h08);
reg_block.timing.t_rcd.set(8'h04);
reg_block.timing.t_rp.set(8'h04);
// Write all fields in one bus transaction
reg_block.timing.update(status, UVM_FRONTDOOR);

// Read back and verify mirror
reg_block.timing.read(status, rdata, UVM_FRONTDOOR);
assert(reg_block.timing.t_ras.get_mirrored_value() == 8'h08);

Understanding get_mirrored_value() vs get() is essential to using the proxy correctly. The two values track different things and diverge any time you call set() without immediately following it with update():

// After reset
reg_block.timing.reset();
// desired == mirrored == reset value (0x06)

// Set desired but don't write yet
reg_block.timing.t_ras.set(8'hFF);
// get()              == 8'hFF  (desired — what we want to write)
// get_mirrored_value() == 8'h06  (mirror — what hardware still holds)

// After update()
reg_block.timing.update(status, UVM_FRONTDOOR);
// get_mirrored_value() == 8'hFF  (mirror synced after write)

Common pitfalls with register proxies:

  • Forgetting lock_model() in build() — the register model is not usable without it; UVM issues a uvm_fatal at runtime when any register operation is attempted.
  • Not calling reset() on the reg block before the test begins — the mirror does not yet reflect the DUT's post-reset state, so predict-and-compare checks fail silently or produce spurious mismatches.
  • Confusing get() (desired) with get_mirrored_value() (last observed hardware state) — use the mirrored value in scoreboards and checkers, not the desired value.
  • Using UVM_BACKDOOR when no HDL backdoor path is defined — UVM cannot locate the signal hierarchy and issues a uvm_fatal.

Part B: Sequencer Protection Proxy

The memory controller agent has one sequencer. In a typical test, the virtual sequence starts a CPU configuration sequence and a DMA burst sequence concurrently. Without arbitration control, UVM's default round-robin policy interleaves their items. For single-beat transactions that is acceptable. For a 64-beat DMA burst it is fatal:

// BAD: two sequences compete for the same sequencer
fork
  dma_burst_seq.start(mem_ctrl_agent.sequencer);  // 64-beat burst
  cpu_config_seq.start(mem_ctrl_agent.sequencer);  // 4-beat config write
join
// Result: interleaved transactions corrupt both transfers

A CPU read can slip in between any two DMA beats. The burst loses atomicity, the memory controller sees an illegal access pattern, and the bus-protocol checker fires. Because the interleaving depends on simulation scheduling, the failure is non-deterministic: it may pass a thousand times before surfacing in a nightly regression.

The fix is to have the DMA sequence acquire exclusive access from the sequencer proxy before it starts sending items. lock() is the cooperative form: it waits until the currently-running sequence finishes its current item, then blocks all other sequences until the lock is released.

class dma_burst_seq extends uvm_sequence #(dma_txn);
  `uvm_object_utils(dma_burst_seq)

  function new(string name = "dma_burst_seq");
    super.new(name);
  endfunction

  task body();
    // Cooperative: wait for the in-flight item to finish, then get exclusive access
    m_sequencer.lock(this);

    repeat (64) begin
      `uvm_do(req)
    end

    m_sequencer.unlock(this);  // Release — MUST call or sequencer deadlocks
  endtask
endclass

The sequencer grants the lock, runs all 64 beats without interruption, then releases arbitration back to the normal pool. The CPU configuration sequence was waiting in the queue the whole time and resumes immediately after unlock(). The driver never saw any of this — it just processed a steady stream of items from get_next_item(). The Protection Proxy did its job.

lock() vs grab(): lock() is cooperative — it yields to the current item already in flight before taking control. grab() is preemptive — it immediately suspends whatever is running and seizes the arbitration slot. Choose based on urgency:

  • lock() — cooperative: waits until the currently-running sequence finishes its current item. Use when you need exclusivity but can tolerate waiting for the in-flight item to complete.
  • grab() — preemptive: immediately suspends whatever is running and takes control. Use when latency is critical and even one more item from another sequence is unacceptable.
  • Both require a matching unlock() / ungrab() — forgetting either causes permanent deadlock.
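
For symmetry with the lock() sequence above, here is a hedged sketch of the preemptive form. The sequence name is invented and the dma_txn item type is assumed:

```systemverilog
class error_inject_seq extends uvm_sequence #(dma_txn);
  `uvm_object_utils(error_inject_seq)

  function new(string name = "error_inject_seq");
    super.new(name);
  endfunction

  task body();
    // Preemptive: suspend whatever is running right now and take the slot
    m_sequencer.grab(this);

    `uvm_do(req)  // inject the urgent transaction immediately

    m_sequencer.ungrab(this);  // MUST call, or the suspended sequence never resumes
  endtask
endclass
```

Note the asymmetry with lock(): the suspended sequence did not get to finish its current item, which is why grab() belongs only in genuinely urgent scenarios such as error or interrupt injection.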

Common pitfalls with sequencer arbitration:

  • Forgetting unlock() / ungrab() — the sequencer is deadlocked forever; the simulation hangs with no error message pointing at the cause.
  • Using grab() when lock() is sufficient — preempting an in-flight item can cut a multi-beat transaction in half, producing the same corruption you were trying to prevent.
  • Calling lock() from outside a sequence body() — the API is only valid within a running sequence context; calling it from a test or component method produces undefined behavior.

Scaling Up: Composing Proxies and Layering Access Control

Individual proxies are useful. Composed proxies scale. This section covers four techniques for building larger, more sophisticated proxy hierarchies from the primitives introduced above.

Register Block Hierarchy as Layered Proxies

A uvm_reg_block can contain other uvm_reg_block instances as sub-blocks. Each block is a proxy for its register domain; the containing block is a proxy for the entire address space. The top-level SOC block delegates to per-IP sub-proxies, each of which carries its own map and its own address offset within the parent map.

class soc_reg_block extends uvm_reg_block;
  mem_ctrl_reg_block mem_ctrl;  // Proxy for memory controller registers
  pcie_cfg_reg_block pcie_cfg;  // Proxy for PCIe config registers

  `uvm_object_utils(soc_reg_block)

  function new(string name = "soc_reg_block");
    super.new(name, UVM_NO_COVERAGE);
  endfunction

  virtual function void build();
    mem_ctrl = mem_ctrl_reg_block::type_id::create("mem_ctrl");
    mem_ctrl.configure(this);
    mem_ctrl.build();

    pcie_cfg = pcie_cfg_reg_block::type_id::create("pcie_cfg");
    pcie_cfg.configure(this);
    pcie_cfg.build();

    default_map = create_map("default_map", 'h0, 4, UVM_LITTLE_ENDIAN);
    default_map.add_submap(mem_ctrl.default_map, 'h1000_0000);
    default_map.add_submap(pcie_cfg.default_map, 'h2000_0000);
    lock_model();  // Lock once the full hierarchy is assembled
  endfunction
endclass

The SOC block is itself a proxy that delegates to sub-proxies. A test calling soc_regs.mem_ctrl.timing.write() never specifies an address — the map hierarchy resolves the final bus address by summing the register's offset within mem_ctrl, plus the sub-map offset ('h1000_0000), plus the top-level map base. Each sub-block is a self-contained proxy for its domain; the top-level block composes them into a single coherent address map.
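
You can watch the proxy do that arithmetic without generating any bus traffic: uvm_reg::get_address() returns the fully resolved bus address. A small fragment, assuming soc_regs is a handle to the block above:

```systemverilog
// No bus transaction — the map hierarchy is queried, not the DUT.
uvm_reg_addr_t addr = soc_regs.mem_ctrl.timing.get_address();
`uvm_info("MAP",
  $sformatf("timing register resolves to 0x%08h", addr), UVM_LOW)
// With the offsets above: 'h1000_0000 (submap) + 'h10 (register),
// assuming the top-level map base is 0.
```

This is a useful sanity check in a report_phase or an address-map unit test: if get_address() disagrees with the hardware spec, the model is wrong before a single transaction runs.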

uvm_reg_field Access Types as Built-In Protection Proxy Rules

The access type passed to uvm_reg_field::configure() is not documentation — it is a protection policy baked into the proxy at build time. The three most common types and what they enforce:

  • "RW" — full read/write access; the proxy delegates both operations to hardware without restriction.
  • "RO" — hardware owns the value; writes are ignored by the proxy and do not generate a bus transaction. The mirror is never updated on a write attempt.
  • "W1C" — write-1-to-clear; the proxy automatically applies the correct mask and semantics. Writing a 0 to a W1C field has no effect; writing a 1 clears the bit. The proxy tracks this automatically.

The protection proxy behavior is observable: writing to an RO field via reg.write() generates a uvm_warning from the register model, and no bus transaction reaches the driver. The proxy has enforced the policy before the call reaches the bus layer. These rules are not overridable at the call site — they are structural constraints defined once in the register model and applied uniformly everywhere that register is touched.
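
As a concrete sketch, here is how a W1C interrupt-status field might be declared so the proxy enforces those semantics. The register and field names are invented for illustration:

```systemverilog
class mem_irq_status_reg extends uvm_reg;
  uvm_reg_field dma_done;  // set by hardware, cleared by writing 1

  `uvm_object_utils(mem_irq_status_reg)

  function new(string name = "mem_irq_status_reg");
    super.new(name, 32, UVM_NO_COVERAGE);
  endfunction

  virtual function void build();
    dma_done = uvm_reg_field::type_id::create("dma_done");
    // configure(parent, size, lsb, access, volatile, reset, has_reset, is_rand, indiv)
    // "W1C" + volatile=1: hardware sets the bit; the model applies write-1-to-clear
    dma_done.configure(this, 1, 0, "W1C", 1, 1'b0, 1, 0, 0);
  endfunction
endclass
```

The access string is the entire policy: no test-side code ever re-implements the clear-on-write-1 rule, because the proxy applies it on every access path.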

Sequencer Priority Arbitration — Beyond Lock/Grab

lock() and grab() are binary: one sequence has exclusive access, all others wait. Priority arbitration is softer: all sequences run, but preferred sequences are scheduled first when multiple compete for the next arbitration slot. The start() method accepts a priority argument as its third parameter:

// DMA runs before CPU when both have items ready, but both run
dma_seq.start(mem_ctrl_agent.sequencer, null, 200);  // higher priority
cpu_seq.start(mem_ctrl_agent.sequencer, null, 100);  // lower priority

When both sequences have an item ready, the sequencer grants the arbitration slot to the higher-priority sequence first. The lower-priority sequence is not starved — it runs whenever the higher-priority sequence has nothing queued. This is preference ordering, not exclusion. Use priority when you want preference without blocking: a monitoring sequence that should yield to stimulus sequences, or a low-priority background traffic generator that should not compete equally with the main test sequence. Use lock() or grab() when atomicity matters: a burst that cannot be interrupted regardless of what else is competing.
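
One caveat worth stating explicitly: in stock UVM, the default arbitration mode (UVM_SEQ_ARB_FIFO) ignores the priority argument entirely. To make the priorities above take effect, first select a priority-aware mode on the sequencer (the enum is spelled SEQ_ARB_STRICT_FIFO in older UVM 1.1 releases):

```systemverilog
// Without this, the priority arguments to start() have no effect.
mem_ctrl_agent.sequencer.set_arbitration(UVM_SEQ_ARB_STRICT_FIFO);

fork
  dma_seq.start(mem_ctrl_agent.sequencer, null, 200);  // served first when both are ready
  cpu_seq.start(mem_ctrl_agent.sequencer, null, 100);  // runs when DMA has nothing queued
join
```

UVM_SEQ_ARB_WEIGHTED is the softer alternative: priorities become statistical weights rather than a strict ordering, which avoids starving the low-priority sequence under sustained load.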

Proxy + Facade Composition

The Facade and Proxy patterns are not alternatives — they layer. The virtual sequence mem_subsystem_vseq from the Facade post calls reg.write() internally. The test calls configure_memory(timing_profile) and has no visibility into the fact that uvm_reg is involved at all. The Facade hides the Proxy.

The layering from test to silicon looks like this:

// Test sees only the Facade
mem_subsystem_vseq.configure_memory(FAST_TIMING_PROFILE);

// Inside configure_memory() — Facade calls the Proxy
reg_block.timing.t_ras.set(8'h04);
reg_block.timing.update(status, UVM_FRONTDOOR);  // Proxy takes over here

// Inside update() — Proxy delegates through the chain
// uvm_reg → uvm_reg_map → uvm_reg_adapter → bus sequence → driver → DUT

Facade simplifies what you call; Proxy controls what happens when you call it. The full call chain from the test's perspective collapses to a single named method. The full call chain from the bus's perspective is five layers deep. Both are true simultaneously, and neither interferes with the other.

Test → Facade (vseq) → Proxy (uvm_reg) → Adapter → Driver → DUT

Each layer has a single responsibility: the Facade exposes a domain-meaningful API, the Proxy enforces access policy and maintains the mirror, the Adapter translates register semantics to bus protocol, the Driver executes bus cycles. Composing these patterns does not increase complexity — it distributes responsibility so that each layer stays simple and each layer can be tested, replaced, or extended independently.

Advanced: Proxy + Factory

The Factory decides which register block implementation gets created. The Proxy controls how register access is delegated. Together, they let you swap register access behavior at test time — debug builds add readback verification, performance builds skip it — while the environment remains unchanged. The only difference between a debug regression and a performance regression is the factory override set in build_phase.

A note on UVM mechanics before the code: uvm_reg::write() is defined in the UVM library — you cannot cleanly override it on a per-block basis. The UVM-idiomatic approach is to expose a write_and_check() wrapper task on the register block. Debug tests call reg_block.write_and_check(reg, data, status) instead of reg.write(). This is still the Proxy pattern — the block is the proxy, and tests choose which proxy API to call.

class mem_ctrl_debug_reg_block extends mem_ctrl_reg_block;
  `uvm_object_utils(mem_ctrl_debug_reg_block)

  function new(string name = "mem_ctrl_debug_reg_block");
    super.new(name);
  endfunction

  // Write and immediately read back, comparing against the mirror.
  // Call this instead of reg.write() in debug tests.
  // Note: does not override uvm_reg::write() — call explicitly in tests.
  virtual task write_and_check(input  uvm_reg         rg,
                               input  uvm_reg_data_t  data,
                               output uvm_status_e    status);
    uvm_reg_data_t rdata;
    rg.write(status, data, UVM_FRONTDOOR, default_map);
    if (status != UVM_IS_OK) return;

    rg.read(status, rdata, UVM_FRONTDOOR, default_map);
    if (rdata !== rg.get_mirrored_value())
      `uvm_error("DBG_REG",
        $sformatf("Readback mismatch on %s: wrote %0h, read %0h",
          rg.get_name(), data, rdata))
    else
      `uvm_info("DBG_REG",
        $sformatf("Readback OK on %s: %0h", rg.get_name(), rdata), UVM_MEDIUM)
  endtask
endclass

class mem_ctrl_perf_reg_block extends mem_ctrl_reg_block;
  `uvm_object_utils(mem_ctrl_perf_reg_block)

  function new(string name = "mem_ctrl_perf_reg_block");
    super.new(name);
  endfunction
  // No overrides — inherits standard uvm_reg behavior.
  // Trusts the mirror; skips readback for maximum regression throughput.
endclass

class debug_test extends base_test;
  function void build_phase(uvm_phase phase);
    // Swap in debug register block before the environment builds
    mem_ctrl_reg_block::type_id::set_type_override(
      mem_ctrl_debug_reg_block::get_type());
    super.build_phase(phase);
  endfunction
endclass

class perf_regression_test extends base_test;
  function void build_phase(uvm_phase phase);
    mem_ctrl_reg_block::type_id::set_type_override(
      mem_ctrl_perf_reg_block::get_type());
    super.build_phase(phase);
  endfunction
endclass
// In debug_test's run_phase — calls write_and_check on the debug block
uvm_status_e status;
mem_ctrl_debug_reg_block debug_block;
$cast(debug_block, env.reg_block);

debug_block.write_and_check(debug_block.timing, 32'h08_04_04, status);

sequenceDiagram
    participant Test
    participant Factory
    participant Env
    participant DebugBlock as mem_ctrl_debug_reg_block
    participant Adapter
    participant Driver

    Test->>Factory: set_type_override(debug_reg_block)
    Test->>Env: build_phase()
    Env->>Factory: create("reg_block")
    Factory-->>Env: mem_ctrl_debug_reg_block instance
    Env->>DebugBlock: write_and_check(timing, data)
    DebugBlock->>Adapter: reg2bus(rw) [write]
    Adapter->>Driver: apb_txn (write)
    Driver-->>Adapter: response
    Adapter-->>DebugBlock: mirror updated
    DebugBlock->>Adapter: reg2bus(rw) [readback]
    Adapter->>Driver: apb_txn (read)
    Driver-->>Adapter: prdata
    Adapter-->>DebugBlock: bus2reg
    DebugBlock->>DebugBlock: compare rdata vs mirror

Factory (what to create), Adapter (interface translation), Decorator (behavior addition), Facade (subsystem simplification), Proxy (access control) — five patterns woven through one testbench. Each solves a different problem. None replaces another.

Quick Reference and Series Retrospective

Series Retrospective

Throughout this series we have built out a single memory subsystem testbench and deliberately reached for a different structural pattern at each layer. The result is not an academic exercise — it is the real shape of a well-factored UVM environment, and each pattern solved a distinct problem that none of the others could.

At the interface boundary, Adaptermem_apb_adapter — translates uvm_reg_bus_op to apb_txn and back. The register model speaks in generic register operations; the APB driver speaks in protocol transactions. The adapter bridges that mismatch without forcing either side to change its language. At the observation layer, Decorator subscribers attach to the memory monitor’s analysis port. Coverage collectors and protocol checkers add instrumentation without touching the monitor itself — behavior addition with zero modification. At the test layer, Facademem_subsystem_vseq — hides the choreography of CPU configuration sequences, DMA setup sequences, and memory controller timing behind a single high-level API. Engineers calling that virtual sequence do not need to know how many agents are involved. And threaded through every layer, Proxyuvm_reg and the UVM sequencer — mediates every register access through a delegation chain and arbitrates DMA versus CPU bus access through lock(). Access control, mirror maintenance, and predictable serialization all live in the proxy, invisible to callers.

Four patterns, four orthogonal concerns — none replaces another.

graph TB
    Test["Test / Virtual Sequence (Facade)"]
    RegModel["uvm_reg (Proxy)"]
    Sequencer["Sequencer (Protection Proxy)"]
    Adapter["mem_apb_adapter (Adapter)"]
    Monitor["Monitor"]
    Coverage["Coverage Subscriber (Decorator)"]
    Checker["Protocol Checker (Decorator)"]
    Driver["Driver"]
    DUT["DUT"]

    Test --> RegModel
    Test --> Sequencer
    RegModel --> Adapter
    Adapter --> Sequencer
    Sequencer --> Driver
    Driver --> DUT
    DUT --> Monitor
    Monitor --> Coverage
    Monitor --> Checker

Reference Tables

Table 1: GoF role → UVM mapping

GoF Role       | Register Proxy                  | Sequencer Proxy
Subject        | Register with read/write        | Bus driver access
RealSubject    | Hardware register (DUT)         | UVM Driver
Proxy          | uvm_reg                         | UVM Sequencer
Access control | uvm_reg_field types (RW/RO/W1C) | lock() / grab()

Table 2: Common mistakes

Mistake                                                    | Fix
Forgetting lock_model() after build()                      | Always call lock_model() as the last line of uvm_reg_block::build()
Not calling reg_block.reset() before the test              | Call reset() in start_of_simulation_phase to initialize mirror values
Using get() instead of get_mirrored_value() in scoreboard  | get_mirrored_value() reflects hardware state; get() is the desired value
Forgetting unlock() after lock()                           | Always pair lock() with unlock() — the sequencer deadlocks permanently
Using grab() when lock() suffices                          | grab() preempts in-flight items; use only when latency is critical
Creating reg block with new() instead of type_id::create() | Use type_id::create() so Factory overrides work

Table 3: Definitive 4-pattern comparison

Pattern   | Intent                                  | Same interface as real object?   | Wraps               | UVM Concept
Adapter   | Translate incompatible interfaces       | No — translates to new interface | One object          | uvm_reg_adapter
Decorator | Add behavior without changing interface | Yes — same interface, extended   | One object (chains) | Analysis subscribers
Facade    | Simplify a complex subsystem            | No — new simplified interface    | Multiple objects    | Virtual sequences
Proxy     | Control access to an object             | Yes — identical interface        | One object          | uvm_reg, sequencer

Previous: Facade Pattern — Simplifying multi-agent complexity with virtual sequences

Next: Behavioral Patterns series — coming soon

Author
Milan Kubavat
Sharing knowledge about silicon verification, hardware design, and engineering insights.
