Black Box vs. Glass House
Forensic schedule analysis determines who caused delays on construction projects and what those delays cost. The work typically involves parsing thousands of schedule activities, comparing planned timelines to what actually happened, cross-referencing project documentation, and building a defensible narrative about causation. This analysis often underpins multimillion-dollar claims and must withstand adversarial scrutiny in deposition, arbitration, and trial. Every conclusion must be traceable, explainable, and reproducible.
AI tools are now capable of performing meaningful portions of this work. Document classifiers, schedule data parsers, natural language processors, and large language models can handle tasks that were, until recently, performed entirely by human analysts. According to the Thomson Reuters 2025 Generative AI in Professional Services Report, active AI use among legal organizations nearly doubled in a single year, rising from 14% in 2024 to 26% in 2025. The construction disputes sector is following that trajectory.
Yet the governance has not kept pace. AACE International acknowledged this explicitly in Professional Guidance Document No. 02, stating that “there are no RPs for analytics methods at this time (e.g., machine learning and artificial intelligence)” while noting that “there are commercial applications on the market” (AACE 2024). Several adjacent professions have published AI governance frameworks. The Academy of Experts (2026), the Chartered Institute of Arbitrators (2025), the SVAMC (2024), and the AICPA (2025) have each addressed AI use by expert witnesses, arbitrators, or forensic professionals. All emphasize human accountability, non-delegation of decision-making, and risk-tiered oversight. None addresses the specific sequential dependencies of forensic schedule analysis.
The tools have arrived. The question facing our profession is whether we develop structured standards for AI use before or after the first successful challenge to an AI-assisted expert opinion in adversarial proceedings.
I believe the answer starts with a simple framework. Are you building a black box or a glass house?
The Black Box
A black box is any analytical process where data goes in, a conclusion comes out, and the reasoning in between is opaque.
In forensic schedule analysis, methodology is the foundation of defensibility. Whether an analyst performs a Time Impact Analysis, a Windows Analysis, or an As-Planned versus As-Built comparison under AACE RP 29R-03 (AACE 2021) or ASCE Standard 67-17 (ASCE 2017), the value of the work depends entirely on the ability to explain every step. How was the critical path identified? How were delays categorized? How was concurrency evaluated? How was each conclusion reached?
When an AI tool is given schedule data and asked to identify the cause of delay, it may produce a plausible-sounding answer. But if the analyst cannot trace the reasoning, verify the logic against the underlying data, and explain the methodology in professional or legal proceedings, that answer has no evidentiary foundation.
An analysis that cannot be explained cannot be defended.
Why the Black Box Is More Dangerous Than It Appears
Two specific AI behaviors make unsupervised use particularly dangerous in forensic work.
Hallucinations. AI models can generate analysis that references schedule activities, dates, logic relationships, and float values that do not exist in the actual schedule data. The output reads with confidence and specificity. It looks like real analysis. But the underlying facts may be fabricated. In a discipline where every factual claim must be traceable to a contemporaneous project record, this is not merely an error. It is the generation of unsupported assertions in a document that may be submitted as evidence. The risk is not that the AI gets an answer wrong. The risk is that it gets an answer wrong in a way that looks right, and that an analyst who is not checking every data point against the source material accepts it.
Irreproducibility. AI models can produce different output from the same input when run multiple times. Forensic analysis must be reproducible: the same data, methodology, and assumptions should yield the same conclusion. If an analyst cannot reproduce their own results, the analysis is vulnerable to a straightforward challenge: run it again and show me you get the same answer.
These are not theoretical concerns. They are practical realities that every practitioner adopting AI tools must understand and address in their workflow.
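For deterministic processing steps, the reproducibility requirement can be checked mechanically: run the step twice on identical input and confirm the outputs fingerprint identically before relying on the result. A minimal Python sketch of the idea follows; the `analyze` function is a stand-in, not a real scheduling tool, and the data shape is assumed for illustration:

```python
# Safeguard sketch: confirm a deterministic analysis step yields an
# identical fingerprint when re-run on the same input.
import hashlib
import json

def output_fingerprint(result):
    """Stable SHA-256 hash of a JSON-serializable analysis output."""
    canonical = json.dumps(result, sort_keys=True)  # canonical ordering
    return hashlib.sha256(canonical.encode()).hexdigest()

def analyze(schedule):
    """Stand-in for a deterministic analysis step: activities with zero
    total float are treated as critical (schedule: activity -> float)."""
    return {"critical_activities": sorted(a for a, tf in schedule.items() if tf == 0)}

schedule = {"A100": 0, "A200": 5, "A300": 0}
first = output_fingerprint(analyze(schedule))
second = output_fingerprint(analyze(schedule))
assert first == second  # same input, same method, same answer
```

A nondeterministic step would fail this check, which is exactly the signal an analyst needs before the output enters the case record.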
The Glass House
A glass house is the opposite. Every step of the process is visible, auditable, and explainable.
The most practical way to understand how AI should function in forensic schedule work is an analogy practitioners will recognize: directing a junior analyst.
No experienced forensic scheduler hands a junior analyst a P6 export and says, “tell me who caused the delay.” Instead, they assign specific, bounded tasks. Parse this schedule and identify the critical path as of the data date. Compare these two updates and flag activities where actual durations exceeded planned by a defined threshold. Cross-reference these daily logs against critical path activities for a specific period. Draft a factual narrative for a specific delay event based on a defined set of records.
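To show how bounded such a task can be, here is a minimal Python sketch of the duration-overrun check described above. The field names and the 25% threshold are hypothetical, not drawn from any particular P6 export format:

```python
# Illustrative sketch: flag activities whose actual duration exceeded
# planned duration by more than a defined percentage threshold.
def flag_overruns(activities, threshold_pct=25.0):
    """Return IDs of activities where actual exceeds planned by threshold_pct."""
    flagged = []
    for act in activities:
        planned = act["planned_days"]
        actual = act["actual_days"]
        if planned > 0 and (actual - planned) / planned * 100 > threshold_pct:
            flagged.append(act["activity_id"])
    return flagged

# Toy comparison of planned vs. actual durations from two updates.
updates = [
    {"activity_id": "A100", "planned_days": 10, "actual_days": 14},  # +40%
    {"activity_id": "A200", "planned_days": 20, "actual_days": 22},  # +10%
    {"activity_id": "A300", "planned_days": 5,  "actual_days": 9},   # +80%
]
print(flag_overruns(updates))  # → ['A100', 'A300']
```

The point is the scope: the task is defined, the threshold is explicit, and the output is a candidate list for the senior analyst to evaluate, not a conclusion.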
The senior analyst reviews the output, checks it against the source data, applies professional judgment on causation and concurrency, and takes full responsibility for the final work product.
AI should operate within a similar framework: performing clearly defined tasks under the direction of an expert who reviews, validates, and owns every conclusion.
Not All Tasks Carry the Same Oversight Requirements
The key to responsible AI use in forensic schedule analysis is recognizing that different tasks in the analytical workflow require fundamentally different modes of oversight.
Some tasks operate entirely within the factual record of the case. Parsing schedules, extracting dates from documents, drafting chronologies from timestamped events, computing float values. These are tasks where AI can lead the work, provided the expert validates the output before it feeds the next stage of analysis.
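Float computation is a good example of a task that lives entirely in the factual record: it is a mechanical forward and backward pass over the activity network. The toy Python sketch below shows that arithmetic under simplifying assumptions (no calendars, lags, or constraint types, and an assumed data shape); a real schedule carries all three, which is why the expert still validates the output:

```python
# Toy CPM sketch: total float for a small activity network.
# network: {activity_id: (duration, [predecessor_ids])}
def topo_order(network):
    """Order activities so every predecessor appears before its successor."""
    placed, order = set(), []
    while len(order) < len(network):
        for act, (_, preds) in network.items():
            if act not in placed and all(p in placed for p in preds):
                placed.add(act)
                order.append(act)
    return order

def total_floats(network):
    """Forward and backward pass; returns {activity_id: total_float}."""
    es, ef = {}, {}                      # early start / early finish
    for act in topo_order(network):
        dur, preds = network[act]
        es[act] = max((ef[p] for p in preds), default=0)
        ef[act] = es[act] + dur
    project_finish = max(ef.values())
    ls, lf = {}, {}                      # late start / late finish
    for act in reversed(topo_order(network)):
        succs = [s for s, (_, ps) in network.items() if act in ps]
        lf[act] = min((ls[s] for s in succs), default=project_finish)
        ls[act] = lf[act] - network[act][0]
    return {a: ls[a] - es[a] for a in network}

# B and C follow A; D follows both. Critical path A-B-D has zero float.
net = {"A": (5, []), "B": (10, ["A"]), "C": (4, ["A"]), "D": (3, ["B", "C"])}
print(total_floats(net))  # → {'A': 0, 'B': 0, 'C': 6, 'D': 0}
```

AI can lead this kind of computation precisely because every number is checkable against the network itself.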
Other tasks require professional knowledge that goes beyond the case record. Resolving ambiguous baseline candidates, distinguishing genuine delay from float management, selecting the appropriate analytical methodology, classifying delays by responsible party in novel or ambiguous circumstances. These are tasks where the expert must make the determination, even if AI assists in assembling the information.
And some tasks carry a requirement that no AI advancement will change. Courts, tribunals, arbitrators, insurers, and opposing counsel require a named human expert to independently substantiate and defend the conclusions. This accountability requirement is grounded in the structure of dispute resolution itself, not in the current limitations of AI technology. It holds regardless of how capable AI becomes.
A structured approach to AI integration recognizes these distinctions rather than treating all tasks identically.
The Pipeline Problem
This is where forensic schedule analysis diverges from most other professional AI use cases, and where existing governance frameworks fall short.
Forensic schedule analysis is a sequentially dependent workflow. Documents are classified and organized. A project chronology is assembled. Schedules are parsed. Baselines and updates are identified. Analysis windows are defined. Critical paths are tracked. Delays are identified, classified, and quantified. An expert report is written. The expert defends the work in testimony. Each step’s output feeds the next step’s input.
An error upstream propagates downstream with compounding consequence. A project document misclassified at the first stage enters the chronology incorrectly. The chronology distorts the delay identification. The delay identification feeds the expert’s classification of responsible party. By the time the error surfaces in the expert opinion, a simple classification error has become a material misstatement that may not be traceable without a comprehensive audit trail.
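The cascade can be countered structurally: give each stage a validation gate, so bad output stops at the handoff instead of propagating, and record the gate results as an audit trail. A hedged Python sketch of that shape follows; the stage names and checks are illustrative only:

```python
# Sketch: a sequentially dependent pipeline with a validation gate at
# each handoff. A failed gate halts the run rather than feeding the
# next stage, and the trail records what was checked.
def run_pipeline(raw_input, stages):
    """stages: list of (name, transform, validate) tuples.
    Each stage's output feeds the next stage's input."""
    audit_trail = []
    data = raw_input
    for name, transform, validate in stages:
        data = transform(data)
        ok = validate(data)
        audit_trail.append((name, ok))
        if not ok:
            raise ValueError(f"validation failed at stage '{name}'; "
                             "downstream stages not run")
    return data, audit_trail

# Toy two-stage run: classify documents, then build a chronology.
stages = [
    ("classify",
     lambda docs: [{"doc": d, "kind": "daily_log"} for d in docs],
     lambda out: all("kind" in d for d in out)),
    ("chronology",
     lambda docs: sorted(d["doc"] for d in docs),
     lambda out: out == sorted(out)),
]
result, trail = run_pipeline(["log_2023_03", "log_2023_01"], stages)
print(result)  # → ['log_2023_01', 'log_2023_03']
print(trail)   # → [('classify', True), ('chronology', True)]
```

The design choice matters more than the code: validation between stages converts a silent compounding error into a visible stopping point with a record of where it occurred.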
Consider a practical example. A 3,000-activity CPM schedule with 14 monthly updates over the course of a delayed project. Manually comparing critical path progression across all updates could require 40 or more hours of analyst time before the interpretive work even begins. AI-assisted processing could reduce the data assembly to a fraction of that time. But the expert’s evaluation of what those critical path shifts mean, whether they represent excusable or compensable delay, how concurrent events interact, whether the contractor’s own performance contributed, remains a professional judgment that requires the same rigor regardless of how the underlying data was assembled.
The efficiency gain is in the processing. The value is in the judgment. These must not be confused, and this distinction is one our profession will need to articulate clearly as clients and attorneys begin asking why analysis costs remain substantial when AI can accelerate portions of the workflow.
This also means that oversight requirements cannot be based solely on the apparent complexity of a given task. They must account for where the task sits in the pipeline and what happens downstream when the output is wrong. A governance framework that treats each task in isolation, without addressing how errors cascade through a sequential analytical workflow, is structurally incomplete for this discipline.
One additional consideration that practitioners should not overlook is data governance. Forensic schedule analysis involves processing entire case files through multi-stage workflows over months, often under protective orders that restrict third-party disclosure. The confidentiality architecture supporting any AI-assisted workflow must be commensurate with the sensitivity of the case material.
Build Glass Houses
AI is the most significant analytical tool to enter forensic schedule analysis in years. Its adoption will likely accelerate. The practitioners and firms that develop transparent, structured standards for its use will have a meaningful advantage, not just in efficiency but in defensibility.
The fundamentals of this profession have not changed. Methodology must be transparent. Analysis must be reproducible. Conclusions must be traceable to the contemporaneous record. And the expert must be able to explain and defend every element of their work.
That transparency extends to disclosure. When AI tools contribute to the analytical workflow, that contribution should be documented and, where appropriate, disclosed to the tribunal and opposing parties.
Make every step visible. To your client, to the tribunal, and to anyone who wants to look.
Build glass houses.