S8E05 - Assessing Need Or Navigating Policy? The Reality of OT Practice in the NDIS
When assessment tools and NDIS policy don’t line up
As NDIS reforms gather pace, OTs are being pulled in two directions at once. On one side, we’re being told to simplify, reduce “extra” evidence and focus on structured assessment pathways. On the other, the tools we’re expected to use still rely on clinical reasoning, collateral information and transparent justification to be considered valid.
That tension is not theoretical. It’s playing out in everyday report writing, accreditation processes and plan reviews across the country.
The I-CAN reality check many clinicians aren’t expecting
The first reality is the I-CAN's clear expectation that conclusions are supported by explicit reasoning – particularly around frequency and intensity of support. It is not enough to state what is required; the rationale must be transparent, functionally linked and grounded in everyday impact.
The second is the requirement to reference and integrate other sources of evidence – including relevant professional reports, clinical documentation and collateral information. This reflects sound clinical practice and strengthens the integrity of the assessment.
What’s interesting is not that these elements are required – they absolutely should be in a valid and reliable assessment framework. The question is what happens if the tool is later used within a planning system that does not consistently allow for, consider or weight external evidence in the same way.
This isn’t a criticism of I-CAN. It’s a reflection on alignment. If a tool’s validity depends on multi-source evidence and clinical reasoning, then any system using that tool needs to preserve those same inputs to maintain its integrity.
Preparing for support needs assessments without losing OT rigour
With support needs assessments expected to play a larger role in planning, OTs cannot afford to let their clinical standards slip.
Structured tools do not replace professional reasoning. They require it.
Reports still need to clearly connect assessment findings to functional participation. Frequency and intensity must be justified in terms of real-world support needs, not abstract scoring. Recommendations must logically flow from evidence that has been interpreted, not just presented.
This is the moment to simplify templates, not inflate them. Clarity, consistency and defensible reasoning will carry more weight than volume.
Standardised assessments are being used, but not always in a clinically sound way
The conversation around standardised assessments has become increasingly polarised. Some clinicians feel pressure to include more tools to “prove” their case. Others are pulling back, concerned that raw scores will be misread or used in isolation.
The problem isn’t the tools themselves. It’s how they are being used.
Reports overloaded with eight or nine standardised measures rarely add clarity. They often overwhelm the reader and increase the risk of inconsistency. At the same time, simply listing scores without interpretation leaves critical gaps.
A balanced approach works best. Use a small number of well-chosen measures. Provide a concise, plain-language interpretation. Then embed that meaning within the broader functional narrative. A short, well-written paragraph can often achieve more than pages of technical explanation.
Why attaching full assessment forms can backfire
Including full assessment printouts or item-level responses might feel protective, but it can create unintended consequences.
Standardised tools are designed to be interpreted as complete measures, not as isolated questions. When item responses are visible, there is a risk that someone without appropriate training may focus on a single answer and draw inaccurate conclusions.
A safer approach is to include summary scores and your functional interpretation, then clearly link those findings to observed performance and reported impact. If a result does not align with your broader evidence, it is better to explain the discrepancy or exclude the tool than to include data you cannot confidently defend.
A profession shaped by the NDIS, for better and worse
Many early career OTs have only practised within the NDIS environment. That inevitably shapes how assessment, goal setting and documentation are understood.
Without exposure to broader service systems, it can feel normal to build reports primarily around funding logic rather than functional progression. In paediatrics particularly, understanding everyday developmental expectations requires experience that extends beyond templates and scoring systems.
This context helps explain why some clinicians lean heavily on standardised tools. When clinical confidence is still developing, volume can feel safer than judgement. Strong supervision, shared frameworks and consistent practice models are essential to counterbalance that pressure.
What the NDIA’s latest quarterly reporting is signalling
The NDIA publishes quarterly reports outlining participant growth, access trends and performance against the Participant Service Guarantee. Recent reporting highlights tightening access pathways and ongoing pressure points within the Scheme.
Of particular note is the reduced access rate for people with psychosocial disability, a trend that has drawn concern from external advocacy groups.
At the same time, plan reassessment timeframes and review processes remain areas of significant participant frustration. These system-level signals matter because they shape the context in which OT reports are read, reviewed and, at times, challenged.
Appeals and external review are changing too
External review processes continue to evolve, including the role of the Administrative Review Tribunal in NDIS matters. Media reporting has also highlighted concerns about the adversarial nature of appeals and the burden placed on families navigating complex decisions.
For OTs, this reinforces one key point: your report may be read by decision-makers without clinical training and within constrained frameworks. Clarity, structure and accessible reasoning are not optional extras. They are safeguards.
Key takeaways for OTs
- Use a small number of well-chosen standardised assessments and interpret them in plain language linked to function.
- Clearly justify frequency and intensity by anchoring recommendations in day-to-day support needs.
- Avoid attaching item-level assessment responses that may be misinterpreted in isolation.
- Draw on collateral information where clinically relevant to strengthen defensibility.
- Focus on clarity and consistency in report structure rather than increasing volume.
Links
Occupational Therapy Society for Hidden and Invisible Disabilities (OTSi): Discussion Paper - NDIS New Framework Planning
Join us for Adelaide Friends of the Pod Networking Event: https://www.trybooking.com/events/landing/1543542