When I became a business unit CFO of a large bank, my team made early attempts to integrate relationship economics. We had the pieces in place: reporting had improved, pricing had tightened, and Asset Liability Committee (ALCO) discussions were sharper. Yet behavior would still drift when reinforcement wasn’t predictable.
Good quarters would pass and the edges would soften, not dramatically, just enough to notice that the next review wasn’t quite as tight as the last one. The lesson was straightforward: discipline has to be scheduled, because announced intentions don’t survive contact with a busy calendar.
The first stage was quarterly stakeholder readouts where executive management reviewed progress to plan, pricing behavior, relationship depth, balance sheet movement, and emerging risk patterns.
The tone was diagnostic. We weren’t celebrating or defending; we were trying to figure out which banking season was dominant and what it meant for the next 90 days. Was the underperformance a training gap? An operating bottleneck? A product capability issue? Misaligned incentives? The readout identified the pressure point, and the subsequent stages went after the root cause.
From there, priorities flowed into account planning, where portfolio observation had to translate into relationship-level action. Underperforming or maturing relationships were flagged, and teams presented plans grounded in both client need and economic contribution.
In our first year applying this discipline, some long-standing relationships looked different under integrated review. I remember a few of those conversations being genuinely tense, not because anyone was questioning the client, but because the capital allocation picture had changed once we stopped evaluating deals in isolation.
Those were hard meetings. They were also the ones making the most difference because they reconciled economics and client strategy before new commitments were made rather than after. Getting the integrated view before approval was the whole point.
Previously, pricing review had been episodic; deals were escalated only when exceptions became visible. Over time, we refined the triggers so that roughly 90% of decisions stayed in the line, with a minority escalated for broader review.
The ratio was intentional: you can’t centralize your way to discipline. The forum challenged assumptions when risk, capital, or structural complexity warranted it, and concessions were debated explicitly, with deposit expectations clarified and the expected trajectory documented. Early on, we allowed a well-defended exception without formalizing the economic expectation attached to it. It seemed practical at the time, but it later complicated enforcement: we’d blurred the standard, and the next conversation became harder to hold.
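To make the triage concrete, here is a minimal sketch of what an escalation trigger like ours could look like. The thresholds, field names, and the `Deal` structure are illustrative assumptions, not our actual standards, which were richer and retuned as the portfolio evolved.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real triggers were tied to the bank's own
# risk grades and capital standards and were refined over time.
RISK_RATING_LIMIT = 5            # internal grade beyond which review is required
CAPITAL_INTENSITY_LIMIT = 0.12   # capital consumed per dollar of exposure

@dataclass
class Deal:
    risk_rating: int            # internal risk grade; higher means weaker
    capital_intensity: float    # regulatory capital / exposure
    structurally_complex: bool  # unusual covenants, collateral, or tenor

def needs_escalation(deal: Deal) -> bool:
    """True if the deal leaves the line and goes to the pricing forum."""
    return (
        deal.risk_rating > RISK_RATING_LIMIT
        or deal.capital_intensity > CAPITAL_INTENSITY_LIMIT
        or deal.structurally_complex
    )

# A routine deal stays with the line; only exceptions consume forum time.
print(needs_escalation(Deal(risk_rating=3, capital_intensity=0.08,
                            structurally_complex=False)))  # False
```

The design choice is the point: the rule is simple enough to live in the line, so the forum only sees the minority of deals where risk, capital, or structure genuinely warrants debate.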
After decisions were made, measurement closed the loop. Utilization, deposit behavior, capital intensity, and return trajectory were compared to the assumptions embedded at approval, and drift surfaced quickly. We also discovered operational bottlenecks that had nothing to do with pricing: staffing gaps, previously invisible training needs, officers who were consistently overoptimistic in their account plans. Before the rhythm stabilized, forecast variance required narrative defense late in the quarter; afterward, dispersion tightened before averages improved, and the explanation cycle shortened.
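The mechanics of that loop are simple enough to sketch. The metric names, flat data shape, and 15% tolerance below are assumptions for illustration; the actual comparison ran across whole portfolios, quarter over quarter.

```python
# Assumptions captured at approval, per relationship (illustrative values).
approved = {"utilization": 0.60, "deposits": 5_000_000,
            "capital_intensity": 0.10, "return_on_capital": 0.14}

# Observed behavior at quarter end.
actual = {"utilization": 0.82, "deposits": 3_200_000,
          "capital_intensity": 0.11, "return_on_capital": 0.11}

TOLERANCE = 0.15  # hypothetical: flag drift beyond 15% of the approval assumption

def drift_report(approved: dict, actual: dict, tolerance: float) -> dict:
    """Return the metrics whose relative drift from approval exceeds tolerance."""
    flags = {}
    for metric, assumed in approved.items():
        drift = (actual[metric] - assumed) / assumed
        if abs(drift) > tolerance:
            flags[metric] = round(drift, 3)
    return flags

print(drift_report(approved, actual, TOLERANCE))
# {'utilization': 0.367, 'deposits': -0.36, 'return_on_capital': -0.214}
```

In this illustration, capital intensity drifted too, but within tolerance, so it stays off the exception list; the value of the loop was that drift like this surfaced at review time rather than as a late-quarter surprise.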
I should be upfront about something. The whole four-stage cycle was cobbled together outside core systems while the bank modernized its CRM and loan origination infrastructure. Bankers complained about redundancy, and they weren’t wrong: data entry was clunky and the friction was real.
But the rhythm mattered more than the tooling. As the systems improved and were eventually integrated into Q2 PrecisionLender, the mechanics got smoother, but the principle stayed the same.
Adapted from “Beyond Pricing: Disciplined Performance. Real Impact.”