Why Most Organizations Don't Know What Anything Actually Costs
Ask the executive director of a $50M nonprofit what it costs to deliver their flagship program. You'll get a number. Ask how that number was calculated, and the conversation falls apart within ninety seconds.
The number came from the budget. The budget came from last year's budget, with some adjustments. Last year's budget came from the year before. Somewhere five or six years back, someone built the original cost model, and nobody has touched the assumptions since. The program has grown. The team structure has changed. Shared services have expanded. Technology costs have tripled. The original allocation methodology no longer reflects how the organization operates. But the number on the budget line still gets called "program cost," and decisions get made on it as if it were real.
This isn't a small problem. It's the foundation underneath every major financial decision your organization makes, and in most cases that foundation is fiction.
Most leaders assume they know what their programs cost because they have budgets, financial reports, and a finance team that produces numbers on demand. The assumption is wrong. Having a number is not the same as knowing the cost. The number is the output of an allocation methodology that was probably built years ago for a different version of the organization, and that methodology determines whether the number reflects reality or distorts it. Most methodologies distort it. Quietly, consistently, and in ways that compound.
Here's what's actually happening underneath the reports. Direct costs get assigned to programs reasonably well, because they're traceable. The salary of the program director, the materials, the travel. That's the easy part. The hard part is everything else. The CEO's time. The CFO's time. The accounting team. IT. HR. Facilities. Insurance. Compliance. Legal. Board governance. Fundraising infrastructure. All of it has to land somewhere, and the methodology that decides where it lands determines whether your program costs are accurate or imaginary.
Most organizations use square footage, headcount, or revenue percentage to allocate indirect costs. These methods are easy to administer and almost always wrong. A program with five employees and a small office might consume forty percent of the CFO's time because it has the most complex funding requirements. A program with twenty employees and a large footprint might consume almost none of the IT budget because it runs on a single legacy system. Allocating by headcount or square footage produces a number that's mathematically clean and operationally meaningless. Then leadership compares programs to each other, makes investment decisions, sets pricing, negotiates funding, and recovers indirect costs from grants, all on a comparison that has no basis in reality.
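The distortion described above is easy to see with arithmetic. The sketch below uses entirely hypothetical figures: a CFO whose fully loaded cost must be allocated across three made-up programs, once by headcount and once by actual time consumed.

```python
# Illustrative sketch only: how headcount allocation can diverge from actual
# consumption of a shared resource. All names and figures are hypothetical.

cfo_cost = 200_000  # annual fully loaded cost of the CFO (hypothetical)

programs = {
    # name: headcount, and the share of CFO time the program actually consumes
    "Program A": {"headcount": 5,  "cfo_time_share": 0.40},  # complex funding
    "Program B": {"headcount": 20, "cfo_time_share": 0.05},  # simple operations
    "Program C": {"headcount": 15, "cfo_time_share": 0.55},
}

total_headcount = sum(p["headcount"] for p in programs.values())

for name, p in programs.items():
    by_headcount = cfo_cost * p["headcount"] / total_headcount
    by_consumption = cfo_cost * p["cfo_time_share"]
    print(f"{name}: headcount method ${by_headcount:,.0f}, "
          f"actual consumption ${by_consumption:,.0f}, "
          f"distortion ${by_consumption - by_headcount:,.0f}")
```

With these numbers, the headcount method charges Program A $25,000 of CFO cost while it actually consumes $80,000, and charges Program B $100,000 while it consumes $10,000. Both numbers are "mathematically clean"; neither reflects how the resource is used.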
The financial damage shows up in three places, and most organizations don't connect them to the same root cause. First, programs that look profitable are subsidizing programs that look unprofitable, because the allocation is wrong. Leadership starves the wrong programs and over-invests in the wrong ones. Second, indirect cost recovery from federal and state grants is consistently understated, because the rate is built on cost data that doesn't reflect actual consumption. Organizations leave hundreds of thousands of dollars per year on the table and call it compliance. Third, when the organization has to defend its costs, whether to a funder, an auditor, or a board, the methodology can't withstand scrutiny. The numbers fall apart the moment someone asks how they were derived.
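The second damage, understated indirect cost recovery, follows directly from a simple rate formula: indirect cost pool divided by direct cost base. If shared costs are misclassified as direct or buried in the wrong lines, the pool shrinks and the rate understates reality. The figures below are hypothetical, chosen only to show the mechanics.

```python
# Hypothetical illustration of under-recovered indirect costs.
# Rate = indirect cost pool / direct cost base. All figures are invented.

direct_base = 4_000_000           # direct costs in the allocation base
indirect_pool_on_books = 600_000  # indirect costs as currently classified
misclassified = 250_000           # shared costs buried in direct lines (assumed)

stated_rate = indirect_pool_on_books / direct_base
actual_rate = (indirect_pool_on_books + misclassified) / direct_base

# Only the federally funded portion of the base recovers at the rate
federally_funded_direct = 2_000_000
recovery_gap = federally_funded_direct * (actual_rate - stated_rate)

print(f"stated rate {stated_rate:.1%}, actual rate {actual_rate:.2%}, "
      f"annual under-recovery ${recovery_gap:,.0f}")
```

In this invented scenario the stated rate is 15.0% against an actual rate of 21.25%, and the organization forgoes $125,000 a year on its federal awards. The point is not the specific numbers but the mechanism: the rate is only as good as the cost classification underneath it.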
The deeper issue is that most finance teams aren't trained to question cost methodology. They're trained to apply it consistently. Consistency is the goal. So the same flawed methodology gets applied year after year, the books balance, the audit passes, and the organization keeps making decisions on data that's been wrong since the methodology was built. Nobody is doing anything wrong by the standards of their job. The standards of the job don't include checking whether the methodology still works.
Knowing what things actually cost takes a real cost study. Not an allocation refresh, not a budget rebuild, but a structural examination of how shared resources are actually consumed by programs, departments, and funding streams. That study almost always reveals the same patterns. Some programs are dramatically more expensive than the budget suggests. Some are dramatically less. The indirect cost rate is usually understated. The pricing on fee-for-service work is usually wrong. The case for funding from federal sources is usually weaker than it could be. None of this is visible from the financial reports, because the reports are built on the methodology that's producing the distortion.
When you don't know what things cost, you don't know what to charge, what to fund, what to expand, what to cut, or what to recover. Every strategic decision is a guess dressed up in a spreadsheet. The organizations that operate with real cost intelligence make sharper decisions, recover more revenue, and defend their numbers when challenged. The ones that don't keep running on the budget they inherited, wondering why the financials never quite tell them what they need to know.
This is what we identify and fix in the Strategic Assessment.