What the Audit and Accountability Domain Is For
The Audit and Accountability domain is the evidence layer of the CMMC Level 2 framework. Its nine requirements define what gets logged, who is accountable for the logged actions, how the logs are reviewed and protected, and how the integrity of the audit trail is maintained over time. Every operational claim a contractor makes in the System Security Plan ultimately reduces to an audit record question: who performed the action, when, on which system, and with what outcome. If the audit trail cannot answer those questions reliably, the operational claim cannot be substantiated.
For defense contractors approaching assessment, AU is also the domain where the gap between compliance documentation and operational reality is most frequently exposed. A policy that says the organization reviews audit logs is not the same as evidence that the review took place. A tool that generates logs is not the same as a tool that retains, correlates, and protects them. Assessors are trained to look past the stated capability and ask what the evidence actually shows.
The domain interacts tightly with nearly every other domain in the standard. Access control decisions become audit records. Authentication events become audit records. Configuration changes become audit records. Incident response and system integrity monitoring both consume audit output as primary input. When AU is weak, the weakness propagates across the assessment. When AU is strong, the rest of the assessment has a defensible foundation to rest on.
The Structure of the Nine Controls
The nine AU requirements organize into three functional clusters that reflect the lifecycle of an audit record.
The first cluster covers audit generation. These three controls (3.3.1, 3.3.2, and 3.3.7) establish what events are logged, whose actions are recorded, and whether the timestamps on those records are reliable. The generation cluster determines the quality of every downstream activity in the domain. If events are not captured, or if the actor attribution is unreliable, or if the timestamps cannot be trusted, nothing else in AU can compensate.
The second cluster covers review, correlation, and response. These four controls (3.3.3, 3.3.4, 3.3.5, and 3.3.6) govern what happens to audit records after they are generated. Logs must be reviewed, failures in the logging process must produce alerts, records must be correlated across systems to support investigation, and the audit corpus must be queryable for reporting. This cluster is where the operational discipline of the program becomes visible.
The third cluster covers audit protection and management. These two controls (3.3.8 and 3.3.9) address the integrity of the audit infrastructure itself. Audit records and audit tools must be protected from unauthorized access or modification, and the management of audit functions must be restricted to a subset of privileged users. This cluster exists because an audit trail that can be silently modified by the same people it records is not an audit trail.
Audit Generation
The first cluster establishes the raw material of the domain. What events are logged, whose actions are recorded, and whether the timestamps are trustworthy. If the generation cluster is incomplete, every other control in AU is working against incomplete input.
AU.L2-3.3.1 System Auditing
The organization must create and retain system audit logs and records to the extent needed to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized activity. The control has two assessable components. The first is the definition of what events the organization has determined must be logged, which is an organizational decision that must be documented with rationale. The second is the demonstration that those events are actually captured and retained. Assessors frequently find that the event list is comprehensive on paper while the actual log output is either incomplete or configured inconsistently across systems.
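One way to catch the gap between the documented event list and the actual log output is a simple coverage check. The sketch below is illustrative only: the event names and the JSON log shape are assumptions, not a prescribed event selection.

```python
# Hypothetical check: compare a documented event selection against the
# event types actually present in an exported log sample.
import json

# Assumed event selection (illustrative event IDs, not a recommended baseline)
required_events = {"logon_success", "logon_failure", "privilege_elevation",
                   "object_access", "config_change"}

def coverage_gaps(log_lines, required=required_events):
    """Return the required event types never observed in the log sample."""
    seen = set()
    for line in log_lines:
        record = json.loads(line)
        seen.add(record.get("event_type"))
    return required - seen

sample = [
    '{"event_type": "logon_success", "user": "jdoe"}',
    '{"event_type": "config_change", "user": "asmith"}',
]
print(sorted(coverage_gaps(sample)))
# → ['logon_failure', 'object_access', 'privilege_elevation']
```

Running a check like this per system, rather than once for the whole environment, is what surfaces the inconsistent-configuration finding described above.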
View the AU.3.3.1 reference card →
AU.L2-3.3.2 User Accountability
The actions of individual system users must be uniquely traceable to those users. This is the accountability backbone of the audit trail. Shared accounts, service accounts without ownership, and generic administrator credentials all undermine the traceability this control requires. The upstream dependency is Access Control, because an audit record is only as reliable as the identity it references. If AC permits shared or anonymous access, AU cannot produce the traceable record this control requires.
View the AU.3.3.2 reference card →
AU.L2-3.3.7 Authoritative Time Source
System clocks must be synchronized with an authoritative time source so that audit records carry reliable timestamps. This control sits in the generation cluster because a timestamp that cannot be trusted undermines the entire audit trail. The implementation is typically NTP or equivalent, drawing from a designated authoritative source, with monitoring that detects drift or sync failure. Findings in this area often involve systems that are configured to sync but where the sync relationship has failed silently, leaving timestamps that diverge from the authoritative reference.
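Detecting the silent-failure case means actively comparing clocks against the authoritative reference rather than trusting the sync configuration. A minimal drift check, with an assumed five-second tolerance (the actual tolerance is an organizational decision), might look like this:

```python
# Minimal drift-check sketch: compare a host's clock against an
# authoritative reference and flag drift beyond a tolerance.
from datetime import datetime, timedelta, timezone

MAX_DRIFT = timedelta(seconds=5)  # assumed organizational tolerance

def drift_exceeded(local_time, reference_time, tolerance=MAX_DRIFT):
    """True when the local clock has drifted past the allowed tolerance."""
    return abs(local_time - reference_time) > tolerance

ref = datetime(2024, 1, 15, 12, 0, 0, tzinfo=timezone.utc)
ok_host = ref + timedelta(seconds=2)    # within tolerance
drifted = ref + timedelta(seconds=45)   # a silently failed sync relationship
print(drift_exceeded(ok_host, ref), drift_exceeded(drifted, ref))
# → False True
```

In practice the reference timestamp would come from the designated NTP source and the check would run on a schedule, alerting when any host crosses the tolerance.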
View the AU.3.3.7 reference card →
Review, Correlation, and Response
The second cluster determines whether the generated audit records actually become useful evidence. Generation without review is storage. Review without correlation is observation of symptoms rather than patterns. Correlation without a reporting capability cannot produce the output that investigation requires.
AU.L2-3.3.3 Event Review
Logged events must be reviewed and updated. The control carries two obligations. The first is that the review actually happens on a documented cadence, with evidence that events are examined rather than accumulated. The second is that the list of events being logged is updated over time based on what the review finds useful, what threats have emerged, and what operational changes have shifted the risk picture. An audit program that never updates its event selection is an audit program that does not learn.
View the AU.3.3.3 reference card →
AU.L2-3.3.4 Audit Failure Alerting
The system must alert in the event of an audit logging process failure. The concern is that the audit chain can break silently. A log destination fills, a forwarding agent stops, a collection endpoint becomes unreachable, and logs stop arriving. Without alerting, the organization does not discover the gap until an investigation reveals that the relevant records do not exist. The implementation must cover not just the central logging infrastructure but also the individual systems whose logs feed it, because failures at either end produce the same effect.
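A common detection pattern for the "logs stop arriving" failure is a silence check against each source's last-seen timestamp. This is a sketch under assumptions: the fifteen-minute threshold and host names are illustrative, and a real implementation would feed an alerting channel rather than print.

```python
# Heartbeat-style gap detection: flag any log source whose most recent
# record is older than an assumed silence threshold.
from datetime import datetime, timedelta

SILENCE_LIMIT = timedelta(minutes=15)  # assumed threshold, set per environment

def silent_sources(last_seen, now, limit=SILENCE_LIMIT):
    """Return the sources whose newest record is older than the limit."""
    return sorted(src for src, ts in last_seen.items() if now - ts > limit)

now = datetime(2024, 1, 15, 12, 0)
last_seen = {
    "fileserver01": now - timedelta(minutes=3),   # healthy
    "fw-edge":      now - timedelta(hours=2),     # forwarding agent died
}
print(silent_sources(last_seen, now))
# → ['fw-edge']
```

Because the check runs on the central side, it catches failures at either end of the chain: a dead agent on the source and an unreachable collection endpoint look the same from here.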
View the AU.3.3.4 reference card →
AU.L2-3.3.5 Audit Correlation
Audit record review, analysis, and reporting processes must be correlated for investigation and response. This control requires that audit records from multiple systems be brought together in a way that supports cross-system investigation. A user action that begins on a workstation, moves through a network device, and terminates at a file server must be reconstructible from the audit trail. Siloed per-system logs without correlation leave the organization unable to follow activity across boundaries.
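The cross-system reconstruction described above reduces, mechanically, to merging records from multiple sources into one time-ordered trail keyed on the actor. A minimal sketch, with hypothetical source names and record fields:

```python
# Correlation sketch: merge per-system audit records into a single
# time-ordered activity trail for one user.
from datetime import datetime

def user_timeline(sources, user):
    """Flatten records from several systems and order them by timestamp."""
    merged = [r for records in sources.values() for r in records
              if r["user"] == user]
    return sorted(merged, key=lambda r: r["time"])

sources = {
    "workstation": [{"time": datetime(2024, 1, 15, 9, 0), "user": "jdoe", "action": "logon"}],
    "vpn":         [{"time": datetime(2024, 1, 15, 9, 1), "user": "jdoe", "action": "tunnel_up"}],
    "fileserver":  [{"time": datetime(2024, 1, 15, 9, 4), "user": "jdoe", "action": "file_read"}],
}
for r in user_timeline(sources, "jdoe"):
    print(r["time"].time(), r["action"])
```

Note that this only works because 3.3.2 guarantees a consistent user identity across sources and 3.3.7 guarantees comparable timestamps; the correlation cluster depends directly on the generation cluster.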
View the AU.3.3.5 reference card →
AU.L2-3.3.6 Reduction and Reporting
The system must provide audit record reduction and report generation to support on-demand analysis and reporting. Reduction is the ability to filter, group, and summarize audit records so that relevant patterns can be extracted from the larger corpus. Reporting is the ability to produce output that investigators, incident responders, and assessors can use. This control is frequently satisfied by a SIEM or similar platform, but the control is about capability, not about a specific product category. The assessable question is whether on-demand analysis is possible, not whether a particular tool is in place.
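Since the control is about capability rather than product, the essence of reduction can be shown without any particular tool: grouping and counting records so a pattern stands out from the corpus. The field names here are assumptions for illustration.

```python
# Reduction sketch: collapse raw audit records into counts per
# (event_type, outcome) pair so patterns are visible at a glance.
from collections import Counter

def summarize(records):
    """Reduce raw records to counts keyed by (event_type, outcome)."""
    return Counter((r["event_type"], r["outcome"]) for r in records)

records = [
    {"event_type": "logon", "outcome": "failure", "user": "jdoe"},
    {"event_type": "logon", "outcome": "failure", "user": "jdoe"},
    {"event_type": "logon", "outcome": "success", "user": "asmith"},
]
print(summarize(records)[("logon", "failure")])
# → 2
```

A SIEM performs the same operation at scale with a query language; the assessable point is that filtered, summarized output can be produced on demand.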
View the AU.3.3.6 reference card →
Audit Protection and Management
The third cluster protects the audit trail from the people and processes it records. An audit record that can be silently modified by an administrator is not an audit record, and the management of audit functions must be restricted to a subset of users who do not hold broad system privileges elsewhere.
AU.L2-3.3.8 Audit Protection
Audit information and audit logging tools must be protected from unauthorized access, modification, and deletion. The concern is that a compromised account with broad system privileges could alter or destroy the audit trail that would otherwise reveal the compromise. Protection typically involves a combination of technical controls such as write-once storage or remote forwarding to an administrative boundary the source system cannot reach, along with access controls that prevent routine administrators from reaching audit storage. The protection also extends to the audit tools themselves. A SIEM that can be reconfigured by any administrator does not satisfy the protection obligation for the records it holds.
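One common technical ingredient in tamper-evidence is hash chaining: each record's digest incorporates the previous digest, so a modification anywhere breaks every digest after it. This is a minimal sketch of the idea, not a prescribed mechanism; the seed value is arbitrary.

```python
# Hash-chain sketch: link each audit record's digest to the previous one
# so silent modification of any record is detectable downstream.
import hashlib

def chain(records, seed="audit-chain-seed"):  # seed is an arbitrary assumption
    """Return (record, digest) pairs where each digest covers all prior records."""
    digest = hashlib.sha256(seed.encode()).hexdigest()
    out = []
    for rec in records:
        digest = hashlib.sha256((digest + rec).encode()).hexdigest()
        out.append((rec, digest))
    return out

original = chain(["rec1", "rec2", "rec3"])
tampered = chain(["rec1", "recX", "rec3"])
# Altering the second record changes every digest from that point forward.
print(original[2][1] == tampered[2][1])
# → False
```

Hash chaining makes tampering detectable, not impossible; it complements, rather than replaces, the remote forwarding and access separation described above.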
View the AU.3.3.8 reference card →
AU.L2-3.3.9 Audit Management
Management of audit logging functionality must be limited to a subset of privileged users. This control addresses the separation-of-duties concern that an administrator should not be able to modify the audit trail of their own activity. The practical implementation restricts audit configuration, retention policy, and forwarding rules to dedicated audit administrators who do not hold broad system administration privileges elsewhere. The control works in tandem with 3.3.8, where protection addresses the records and tools and management addresses the people who can configure them.
View the AU.3.3.9 reference card →
Where Audit and Accountability Intersects with Other Domains
The audit trail is the common evidence source for several other domains, and its quality determines how those domains can be assessed.
Access Control is the upstream source of the actor identity that every audit record depends on. When AC permits shared or anonymous access, the audit records produced by those accounts cannot satisfy the accountability obligation in 3.3.2. A weak AC implementation creates AU findings that cannot be remediated within the AU domain alone.
Identification and Authentication produces the authentication events that are among the most frequently reviewed audit records. Failed logons, privilege elevations, and credential lifecycle events are all IA activities that AU captures and reviews. The two domains function together as the identity and accountability layer of the control framework.
Incident Response consumes audit output as its primary investigative resource. When an incident requires reconstruction of events, the AU correlation and reporting capabilities in 3.3.5 and 3.3.6 are what make that reconstruction possible. An IR program operating against a weak AU foundation cannot produce defensible timelines.
System and Information Integrity includes monitoring obligations that depend on audit records as input. SI.L2-3.14.7 on identifying unauthorized use is particularly dependent on a well-functioning audit trail, because the detection capability the control requires must reference both the authorized baseline and the actual behavior, and both surface through audit records.
Configuration Management produces change events that AU captures. A configuration change that is not audited cannot be distinguished from an unauthorized modification, and the audit record is what makes the change traceable to the person who approved and implemented it.
Common Implementation Pitfalls
Several patterns come up repeatedly in Audit and Accountability readiness work.
Logs generated but not reviewed. The most common AU finding is that the logging infrastructure is in place, the events are being captured, and no one is actually reviewing them on a documented cadence. The 3.3.3 requirement is not satisfied by the existence of reviewable logs. It is satisfied by evidence that review occurs, that findings are acted on, and that the event selection evolves based on what the review reveals.
Shared accounts that break the accountability chain. Local administrator accounts, service accounts without ownership, and built-in system accounts all produce audit records that cannot be traced to an individual. The 3.3.2 obligation cannot be satisfied when the actor in the audit record is a shared identity. Remediation requires upstream work in Access Control to eliminate shared access, not just configuration work within AU.
Audit failure alerts that go to an unmonitored destination. The 3.3.4 requirement is frequently configured to send alerts to an email distribution list, a log aggregation dashboard, or an inbox that no one watches. The alert capability exists on paper. The human response does not. An alert that no one reads is not an alert.
Time sync configured but not monitored. Systems are configured to sync with an authoritative time source, but the sync relationship fails silently and no one notices until an assessment reveals divergent timestamps across systems. Monitoring the sync relationship, and alerting on failure, is part of the 3.3.7 implementation.
Audit correlation that stops at the SIEM boundary. Correlation works for systems that forward to the central platform and fails for systems that do not. Specialized assets, legacy systems, and cloud services with inconsistent forwarding configurations leave gaps in the correlated view that the 3.3.5 requirement does not tolerate. The correlated audit picture must be complete across the assessment scope, not just across the systems that forward cleanly.
Audit protection that is weaker than the access it needs to contain. If a domain administrator or equivalent can reach audit storage, alter audit configuration, or delete audit records, the protection obligation in 3.3.8 is not satisfied. The implementation needs to separate audit administration from general system administration, even when the same person holds both roles in practice.
Reporting capability that has never been tested. The 3.3.6 requirement for on-demand analysis is often satisfied by the presence of a reporting tool that has never been used to produce an actual report under operational conditions. Assessors may ask for a demonstration, and a capability that has not been exercised tends to reveal gaps under demonstration that were not visible in the configuration review.
Where to Start
For an organization new to the AU domain, the first work is defining what gets logged and why.
The foundational deliverable is the event selection document. It names the events the organization has determined must be captured, explains why each event is included, and maps the events to the systems that produce them. Without this document, the 3.3.1 obligation cannot be satisfied and every downstream control in AU operates against an undefined baseline.
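The shape of such a document is simple: each event carries its rationale and the systems expected to produce it. The entries below are purely illustrative, not a recommended selection.

```python
# Illustrative structure of an event selection document (names assumed).
# Each entry records why the event is logged and which systems emit it.
event_selection = {
    "logon_failure": {
        "rationale": "Detects credential guessing and unauthorized access attempts",
        "systems": ["domain-controllers", "vpn-gateway", "fileserver01"],
    },
    "config_change": {
        "rationale": "Makes changes traceable under Configuration Management",
        "systems": ["all-servers", "network-devices"],
    },
}

for event, entry in event_selection.items():
    print(event, "->", ", ".join(entry["systems"]))
```

Keeping the document in a machine-readable form like this also makes the 3.3.1 coverage check and the 3.3.3 event-selection updates auditable in their own right.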
The second deliverable is the review cadence. A written description of how audit records are reviewed, on what schedule, by whom, and with what follow-through when findings emerge. The review description is the evidence for 3.3.3 and the operational foundation for 3.3.5 correlation work.
The third deliverable is the audit protection architecture. How audit records are stored, who can reach them, how they are protected from modification, and who is authorized to manage the audit functions. This is the evidence for 3.3.8 and 3.3.9, and it is often the area where a contractor's existing infrastructure needs the most significant adjustment.
With the event selection, review cadence, and protection architecture defined, the remaining AU controls become implementation work against a documented baseline. The File Hashing white paper in the resource library addresses the integrity side of audit artifacts and is useful reading for the 3.3.8 protection work in particular.