
By Dr. Joan Schumann
How well do we really understand MTSS—and more importantly, how well are we implementing it?
Across California, MTSS language is everywhere. Teams reference Tier 1, Tier 2, and Tier 3. Schools collect screening data. Intervention blocks are built into master schedules and staffing plans. And yet, outcomes remain uneven—especially for students most at promise.
The issue is not effort. It is clarity.
Too often, MTSS is reduced to a set of structures—tiers, interventions, or schedules—rather than implemented as a coherent, evidence-based framework for decision making. Reclaiming its promise requires moving beyond terminology and focusing on the practices that actually improve outcomes.
From Referral to Prevention: Rethinking the Purpose of MTSS
A persistent misconception is that MTSS functions as a pre-referral process—a more formalized pathway to special education.
This interpretation fundamentally misses the point.
MTSS is a whole-system, prevention-oriented framework designed to ensure that every student receives the support they need—without reliance on teacher referral, parent advocacy, or classroom circumstance (McIntosh & Goodman, 2016). Historically, schools operated within a binary structure: general education and special education. Students who struggled were often required to fail before accessing additional support, with services tied to eligibility rather than need.
MTSS was designed to disrupt this model.
Through universal screening, progress monitoring, and structured team-based decision making, MTSS shifts schools from reactive identification to proactive support (Fuchs & Fuchs, 2006). Decisions are grounded in shared data, not individual perception. Entry into support is based on evidence, not advocacy.
This shift is not procedural—it is foundational to equity. When systems rely on informal referral processes, access to support is uneven and often influenced by bias, visibility, or family advocacy (Hosp & Reschly, 2002). MTSS replaces this variability with consistency, transparency, and collective responsibility.
Screening as Action, Not Compliance
Universal screening is one of the most powerful components of MTSS—but only when it is used with precision.
As Dr. Ken Howell has often noted, “Garbage in, garbage out.” The quality of decisions depends on the quality of the data informing them. Yet assessment selection is frequently driven by convenience or local norms rather than by evidence of reliability, validity, and instructional relevance.
Federal guidance emphasizes that screening tools must accurately identify risk and be sensitive to growth (National Center on Intensive Intervention, 2020). Resources such as NCII’s Academic and Behavior Tool Charts provide independent evaluations that support more rigorous decision making.
However, even high-quality assessments are insufficient if data are not actively used.
In many systems, screening has become a compliance exercise—administered, reported, and filed away without influencing instruction. Effective MTSS implementation requires structured routines in which teams analyze data, identify patterns, and take immediate action (Ikeda et al., 2016).
Critically, this work must occur within a culture of professional trust. When data are used evaluatively rather than formatively, they inhibit honest reflection and limit improvement.
Strong systems instead establish a clear cascade of responsibility: state, county, district, and site leaders align around shared data, while teams at each level use that data to drive decisions. Within schools, screening data should be used to evaluate Tier 1 effectiveness, inform flexible grouping, and monitor the impact of instructional changes.
If a majority of students are below benchmark, the response is not simply to increase intervention—it is to strengthen core instruction.
Screening is not about collecting data. It is about using data to adjust teaching and improve student outcomes.
Tiers as a System of Support—Not Labels
Another common misconception is that tiers represent categories of students.
They do not.
Tiers were designed to replace a limited, two-tiered system—general education and special education—with a continuum of support that responds more flexibly to student need (Sugai & Horner, 2002). Tier 1 represents high-quality core instruction for all students, while Tiers 2 and 3 reflect increasing levels of intensity delivered based on data and adjusted over time.
The purpose is not placement—it is responsiveness.
Yet in practice, this intent is often lost. Students are labeled as “Tier 2” or “Tier 3.” Supports become tied to programs, roles, or locations. Systems become static rather than dynamic—replicating the very structure MTSS was designed to replace.
This is where implementation breaks down.
Tier 1, in particular, is frequently misunderstood as “what we already do,” when in fact it is the primary driver of outcomes. Strong core instruction reduces the need for intensive intervention and improves outcomes for all learners (Hattie, 2009).
Designing Tier 1 requires intentionality. It must be aligned to standards, grounded in evidence-based practices, and responsive to data. It may be delivered by general educators, specialists, or coordinated teams—but it is not a program to be delegated. It is the school’s instructional system. Its effectiveness is measured by student outcomes, which must be collectively owned—and squarely led—by site leadership.
And it will look different across contexts.
A school with 73% of students meeting benchmark has a fundamentally different design challenge than a school where only 13% are proficient. In the latter, the issue is not identifying students for intervention—it is system capacity.
When demand exceeds supply, access to support becomes competitive. Educators advocate. Families push. Intervention becomes something students must gain entry to.
This is not a student problem. It is a system design problem.
MTSS was built to address this.
Rather than rationing intervention, effective systems expand capacity—strengthening Tier 1, increasing access to evidence-based supports, and deploying resources strategically (Fuchs & Fuchs, 2006). The question shifts from “Who qualifies?” to “How is the system responding to the level of need demonstrated by the data?”
This same principle applies across all tiers.
There is no universal configuration—only the expectation that systems are intentionally designed, continuously monitored, and adjusted based on outcomes. Students should move fluidly between levels of support. Support should increase or fade based on need—not labels.
This distinction is especially important in relation to special education.
Tier 3 is not synonymous with special education. MTSS provides support based on need; special education provides services based on eligibility and entitlement (Fuchs et al., 2010). A student may require intensive support without qualifying for special education, just as a student with an IEP may primarily receive Tier 1 instruction with accommodations and supplemental Tier 2 support.
The power of MTSS lies in its ability to provide immediate, responsive support—without delay, without gatekeeping, and without waiting for formal identification.
Support is not something students earn access to.
It is something the system is designed to deliver.
Designing for Responsiveness
At its core, MTSS is about building systems that respond effectively to student need.
Consider a student who has experienced interrupted or limited access to instruction. In a traditional system, significant academic gaps might trigger concern about disability. Within MTSS, the initial response is instructional: provide intensive, evidence-based intervention, monitor progress, and adjust based on response.
If the student demonstrates accelerated growth, the need is instructional. If progress remains limited despite strong intervention, further evaluation may be warranted.
This approach ensures that students receive support immediately while reducing the risk of misidentification (Burns & Gibbons, 2012).
MTSS, when implemented as designed, replaces assumptions with evidence and delays with action.
The Leadership Imperative
Reclaiming the promise of MTSS ultimately depends on leadership.
Research from the Wallace Foundation underscores that effective school leadership is second only to classroom instruction in its impact on student outcomes—and that leaders play a critical role in establishing the conditions for coherent, evidence-based systems (Wallace Foundation, 2021).
MTSS does not implement itself.
It requires leaders who can:
- Establish clarity about purpose and practice
- Align resources to student need
- Build systems for data-based decision making
- Foster cultures of trust and continuous improvement
Most importantly, it requires leaders who are willing to move beyond structure and focus on implementation.
California has invested significantly in MTSS. The next phase of this work is not about adding new initiatives, but about deepening understanding—ensuring that MTSS functions as the operating system that sustains effective practice and improves outcomes over time.
Because the promise of MTSS was never about tiers.
It was about building systems that work—for every student.
And that promise depends on leadership.
Joan Schumann, Ph.D., is the CEO of Leading for Learning and the founding executive director of the International MTSS Association, which is holding its Global Summit July 29-August 1 in San Francisco.
References
Burns, M. K., & Gibbons, K. A. (2012). Implementing response-to-intervention in elementary and secondary schools: Procedures to assure scientific-based practices. Routledge.
Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41(1), 93–99.
Fuchs, D., Fuchs, L. S., & Stecker, P. M. (2010). The “blurring” of special education in a new continuum of general education placements and services. Exceptional Children, 76(3), 301–323.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.
Hosp, J. L., & Reschly, D. J. (2002). Regional differences in school psychology practice. School Psychology Review, 31(1), 11–29.
Ikeda, M. J., Tilly, W. D., Stumme, J., Volmer, L., & Allison, R. (2016). Agency-wide implementation of data-based decision making: Fidelity and student outcomes. Journal of Behavioral Education, 25(1), 50–64.
McIntosh, K., & Goodman, S. (2016). Integrated multi-tiered systems of support: Blending RTI and PBIS. Guilford Press.
National Center on Intensive Intervention. (2020). Academic and behavior screening tools charts. American Institutes for Research.
Sugai, G., & Horner, R. (2002). The evolution of discipline practices: School-wide PBIS. Child & Family Behavior Therapy, 24(1–2), 23–50.
The Wallace Foundation. (2021). How principals affect students and schools: A systematic synthesis of two decades of research.