These conditions are present in the majority of AI projects, and they're present at scoping. The project is approved on assumptions about operational documentation, data availability, process clarity, decision authority, integration scope, and measurement infrastructure that aren't tested before approval. The testing happens during implementation, and by then the project has commitments, deadlines, and visibility that make graceful course correction nearly impossible. The project either delivers a compromised version of the original vision, runs significantly over schedule and over budget, or quietly fails and gets repositioned as something different from what was originally promised.