Episode 104: Providing Guidance on Information Systems Quality Improvement
Welcome to The Bare Metal Cyber CISA Prepcast. This series helps you prepare for the exam with focused explanations and practical context.
High-quality information systems are essential for organizational performance, security, and resilience. When systems meet their requirements, perform reliably, and are easy to maintain, organizations reduce the likelihood of errors, downtime, and user dissatisfaction. Conversely, when quality is low, operational disruptions increase, technical debt accumulates, and security risks expand. Quality affects everything from functionality and speed to data integrity and user trust. For auditors, evaluating system quality is not about writing code—it is about identifying the conditions and controls that support reliable and secure development, maintenance, and support. The CISA exam frequently includes topics related to system development life cycle controls, testing, process improvement, and quality assurance integration. Auditors play a vital role in ensuring systems meet business expectations and control requirements throughout their lifecycle.
System quality can be broken down into several key dimensions. Functionality refers to whether the system meets its intended requirements and performs as expected. Reliability refers to system stability and uptime—whether the system runs without crashing, corrupting data, or losing transactions. Usability considers how easily users can interact with the system, including interface design, documentation, and accessibility. Efficiency relates to resource utilization, such as how quickly the system processes transactions and how it manages memory or storage. Maintainability includes how easily the system can be updated, patched, enhanced, or scaled to meet new needs. Auditors must consider these dimensions when reviewing control effectiveness and identifying process gaps. CISA candidates should understand how each of these dimensions affects both technical risk and organizational outcomes.
Auditors contribute to system quality by providing independent evaluations of processes, controls, and practices. They do not write software or manage development teams, but they do identify deficiencies in areas such as requirements validation, testing coverage, change control, and documentation. Auditors recommend enhancements such as implementing code review procedures, automating tests, and integrating quality gates into development pipelines. They also drive root cause analysis when issues recur, guiding organizations toward systemic improvement rather than surface-level fixes. By promoting secure development principles and quality assurance best practices, auditors help teams avoid the kinds of errors that lead to security incidents, noncompliance, or poor performance. In this role, auditors serve as independent advisors—bringing visibility and structure to what is often a fast-moving and fragmented process.
To evaluate system quality, auditors must first identify key inputs. These include system development life cycle documentation such as requirements statements, design specifications, test plans, and release documentation. Incident logs, defect trackers, and downtime reports help identify where quality issues have emerged during or after deployment. User feedback, gathered from helpdesk tickets or surveys, offers insight into usability and responsiveness. Change management records and post-implementation reviews provide additional context into how well new systems or updates are planned, tested, and supported. Auditors analyze these data sources to identify patterns of error, gaps in control coverage, or missed opportunities for process improvement. On the CISA exam, candidates should be prepared to recognize how each of these inputs contributes to understanding and improving overall system quality.
Quality assurance is most effective when integrated into every stage of the system lifecycle. Auditors review whether testing includes all necessary types—unit testing for individual components, integration testing for interaction between modules, system testing for end-to-end functionality, and user acceptance testing to confirm that user needs are met. Automated test coverage should be reviewed to ensure repeatability and speed, and defect tracking systems should be used to identify and manage all known issues through closure. Test cases must align with both functional and control objectives, ensuring that systems not only work but also meet security, compliance, and performance expectations. Test signoffs and stakeholder approvals must be documented and verifiable. CISA candidates may be asked to identify scenarios where lack of test documentation, coverage, or enforcement led to a system failure or control violation.
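The distinction between unit and integration testing described above can be sketched in code. This is an illustrative example only, assuming two hypothetical components (`calculate_tax` and `build_invoice_total`) that are not part of any real system, using Python's built-in unittest framework:

```python
# Hypothetical sketch: a unit test for one component in isolation, and an
# integration-style test for two components working together. The functions
# and figures are assumptions for illustration only.
import unittest

def calculate_tax(amount: float, rate: float) -> float:
    """Single component: compute tax on an amount."""
    return round(amount * rate, 2)

def build_invoice_total(amount: float, rate: float) -> float:
    """Second component: combine the base amount with the tax component."""
    return amount + calculate_tax(amount, rate)

class UnitTestExample(unittest.TestCase):
    def test_calculate_tax(self):
        # Unit test: exercises one component by itself.
        self.assertEqual(calculate_tax(100.0, 0.07), 7.0)

class IntegrationTestExample(unittest.TestCase):
    def test_invoice_total(self):
        # Integration test: exercises the interaction between components.
        self.assertEqual(build_invoice_total(100.0, 0.07), 107.0)

if __name__ == "__main__":
    unittest.main()
```

System testing and user acceptance testing would sit above this level, exercising the full deployed application and confirming business expectations rather than individual functions.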
Change management and post-implementation reviews are essential for closing the loop on quality. Every system change—whether new development, configuration update, or bug fix—must follow a defined process that includes planning, approval, testing, and rollback capability. After deployment, a post-implementation review (PIR) should be conducted to evaluate whether the change met its objectives and whether any new issues were introduced. Stakeholders should be involved in assessing the outcome and identifying lessons learned. Defects, user feedback, and operational metrics must be reviewed as part of this process. Auditors evaluate whether PIRs are conducted consistently, whether issues are tracked to closure, and whether insights are incorporated into future planning. The CISA exam may include questions on how to assess the effectiveness of post-change reviews and how those reviews contribute to long-term quality improvement.
Metrics help translate system quality into measurable outcomes. Common quality metrics include defect density, which tracks the number of bugs relative to code size or complexity; defect leakage, which measures how many issues escape early detection and surface later in production; and mean time to resolution, which tracks how quickly problems are addressed once detected. Other metrics include system availability, transaction error rates, SLA compliance, and user satisfaction scores. Tools such as static analysis platforms or code quality dashboards help teams monitor technical debt and adherence to coding standards. Auditors review whether metrics are defined, collected regularly, and used to drive improvement decisions. On the CISA exam, candidates should know how to interpret metrics and evaluate whether they are integrated into governance and reporting processes.
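The three metrics named above reduce to simple arithmetic. The following sketch shows one plausible way to compute them; the function names and sample figures are assumptions for illustration, not definitions from a standard:

```python
# Hypothetical sketch of the quality metrics discussed above.
# Sample figures are invented for illustration only.

def defect_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def defect_leakage(found_in_production: int, found_in_testing: int) -> float:
    """Share of total defects that escaped testing and surfaced in production."""
    total = found_in_production + found_in_testing
    return found_in_production / total

def mean_time_to_resolution(resolution_hours: list[float]) -> float:
    """Average hours from detection of a problem to its resolution."""
    return sum(resolution_hours) / len(resolution_hours)

# Example: 45 defects in a 30 KLOC system -> 1.5 defects per KLOC.
density = defect_density(defects_found=45, kloc=30.0)
# Example: 5 of 50 total defects reached production -> 10% leakage.
leakage = defect_leakage(found_in_production=5, found_in_testing=45)
# Example: three incidents resolved in 4, 12, and 8 hours -> MTTR of 8 hours.
mttr = mean_time_to_resolution([4.0, 12.0, 8.0])
```

The auditor's concern is less the arithmetic than whether figures like these are collected consistently, trended over time, and actually fed into governance and improvement decisions.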
Process maturity and standardization are foundational to repeatable quality. Organizations should compare their development and maintenance practices to recognized standards such as the Capability Maturity Model Integration or the ISO/IEC 25010 quality model. Development should follow structured coding standards, with templates for documentation and procedures for version control. Peer code reviews, automated testing pipelines, and continuous integration practices support early detection of issues. Incorporating security reviews into DevOps workflows—known as DevSecOps—helps ensure that security and quality are considered from the start. Auditors evaluate whether teams follow consistent, documented processes and whether those processes support scalability and compliance. CISA candidates should understand how to assess process maturity and how lack of standardization leads to inconsistent results and increased risk.
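A quality gate in a continuous integration pipeline, as mentioned above, is often just a script that blocks a release when thresholds are missed. This is a minimal sketch under assumed thresholds and metric names, not a real pipeline configuration:

```python
# Hypothetical CI quality gate: a pipeline might run a check like this after
# automated tests and security scans. Thresholds and metric names are
# assumptions for illustration only.

def quality_gate(metrics: dict, thresholds: dict) -> list[str]:
    """Return a list of gate failures; an empty list means the gate passes."""
    failures = []
    if metrics["test_coverage_pct"] < thresholds["min_coverage_pct"]:
        failures.append("coverage below minimum")
    if metrics["critical_defects_open"] > thresholds["max_critical_defects"]:
        failures.append("open critical defects exceed limit")
    if not metrics["security_scan_passed"]:
        failures.append("security scan failed")
    return failures

build_metrics = {
    "test_coverage_pct": 82.0,
    "critical_defects_open": 0,
    "security_scan_passed": True,
}
gate_thresholds = {"min_coverage_pct": 80.0, "max_critical_defects": 0}

print(quality_gate(build_metrics, gate_thresholds))  # empty list: gate passes
```

From an audit perspective, what matters is that gates like this are documented, applied consistently to every change, and cannot be bypassed without an approved exception.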
Training and knowledge sharing are key enablers of system quality. Developers and IT staff must receive training on secure coding, testing techniques, and quality management tools. Business analysts and stakeholders need guidance on writing effective requirements and reviewing deliverables. Organizations should maintain internal knowledge bases, code repositories, and project wikis to support onboarding and team collaboration. Auditors can recommend training improvements based on observed errors, repeat incidents, or control failures. They also assess whether training is up to date, relevant to specific roles, and validated through testing or performance metrics. On the exam, CISA candidates may be asked to evaluate how training gaps contribute to defects or how knowledge sharing supports quality assurance.
For CISA candidates, evaluating system quality means reviewing how systems are designed, built, tested, and maintained—not just whether they function today, but whether the processes behind them are resilient and governed. You must assess testing coverage, development controls, defect tracking, and the maturity of improvement practices. Expect questions on metrics, SDLC documentation, post-implementation reviews, and auditor roles in quality enhancement. Auditors do not fix code—but they help organizations understand the conditions that lead to quality issues and identify where controls, training, or processes must improve. Quality improvement is not about perfection—it is about continuous progress toward systems that are functional, secure, and aligned with user and business needs.
Thanks for joining us for this episode of The Bare Metal Cyber CISA Prepcast. For more episodes, tools, and study support, visit us at Baremetalcyber.com.
