The Backpack Problem: Why More S+C Tools Have Slowed Coaches


Eric Wynalek

CEO

5/1/26

Gavin Benjafield, Performance Director at LAFC, said something in the Upside Newsletter this April that should reframe how every elite performance department thinks about its technology stack. His position is that we have so much information at our fingertips that practitioners are actually becoming distracted by it, and that the constant pressure to add new tools, new technologies, and new ideas is often diluting the work that was already getting done well.

He used the analogy of a backpack. Every new platform, every new sensor, and every new dashboard adds weight, but athletes do not get more time and staff do not get more capacity. What you actually end up with is a heavier load and slower decisions.

This is the pain we have heard from S&C and sport science staff at every tier over the last six months, and it deserves a closer look. The data collection is fine, the dashboards are fine, and the integrations are mostly fine. The decisions are slower than they were five years ago, and the question worth asking is why.

The collection problem was solved a decade ago

Wearables matured. Force plates became affordable. GPS and local positioning systems went into every elite training facility, and subjective wellness apps proliferated. By the late 2010s, an S&C department at a serious program could collect more data per athlete per session than the previous generation of practitioners could process in a season.

What did not mature at the same pace was the architecture between collection and decision-making. Each tool shipped with its own dashboard, its own export format, and its own login, which meant the integration work was left to the staff. The senior person on the team became the manual translator between five different platforms, reconstructing the athlete picture every Monday morning from disconnected exports.

The collection is solved. The integration is mostly solved. The decision-making is where the work is still breaking down.

The Tuesday test

Here is a simple diagnostic any performance director can run on their current stack. Open your platform on a Tuesday morning, before that day's training, and ask one question. Does the data on the screen change what you assign in the next ninety minutes?

If the answer is yes, your system is working as a decision-making input. If the answer is no, you have a record. Records are useful for retrospective review, but they are not useful for the decision you have to make in the next ninety minutes. When the platform is a record, the decision is still happening in the coach's head while the data sits in a tab somewhere nearby.

Most platforms on the market fail the Tuesday test, because they were built for the look-back rather than for the next decision. They were never architected to close the loop between data collection and programming.

What the simplification thesis actually demands

The future Benjafield described is not fewer tools as a slogan. It is fewer inputs and sharper decisions. The platform builders who win the next decade will not be the ones with the most features. They will be the ones whose features actually fire decisions.

This requires a different architecture, and the architecture comes down to three capabilities a serious S&C platform has to hold. First, the platform has to be where the program is built and assigned, rather than a delivery layer downstream of where the real thinking happens. Second, the platform has to support custom metrics, because the data points that map to how a specific sport wins are rarely the data points that ship in a default library. Third, the platform has to encode decision logic, so that data updates can drive programming changes through rules the coach defines in advance, instead of data sitting on a dashboard waiting for the coach to act on it manually every morning.

Three capabilities. That is the right number for the foundation.

What you actually keep when you cut the stack

We have run this exercise with performance staff at every tier, and the coaches who actually went through it kept three things in their core S&C stack.

The programming layer. This is the system where the work is built, assigned, and modified. It is the foundation, and everything else is downstream of it. If your platform is not where you build programming, then your programming lives somewhere else, which means you are running two systems where you should be running one.

Custom metrics. Not the generic 1RM, RPE, and volume library that every off-the-shelf platform ships with, but the specific metrics that map to how the sport, the position, and the athlete population actually win. A vertical jump percentile against your own roster matters. A generic industry average does not.
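To make the roster-relative idea concrete, here is a minimal sketch of what a percentile-against-your-own-roster calculation looks like. This is an illustration, not FYTT's implementation; the metric name and sample values are hypothetical.

```python
from bisect import bisect_left

def roster_percentile(value: float, roster_values: list[float]) -> float:
    """Percentile rank of one athlete's score against the team's own roster,
    rather than a generic industry average."""
    ranked = sorted(roster_values)
    # Count roster scores strictly below this value, then express as a percentile.
    below = bisect_left(ranked, value)
    return 100.0 * below / len(ranked)

# Hypothetical roster vertical-jump heights (inches).
jumps = [22.5, 24.0, 25.5, 26.0, 27.5, 28.0, 30.0, 31.5]
print(roster_percentile(27.5, jumps))  # prints 50.0: mid-pack on this roster
```

The same jump height would land at a very different percentile against a different roster, which is exactly why the reference population matters more than the default library.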

Decision-tree logic. This is the rules layer that turns data into a decision. If an athlete tests below threshold, the regression assigns automatically. If a wellness score drops, the athlete routes to a recovery group. The decision-making is encoded into the workflow rather than held in the coach's head, which is what makes it survive staff transitions and scale beyond a single coach's attention.
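The rules layer described above can be sketched in a few lines. This is a toy illustration of the pattern, not FYTT's actual rule engine; the thresholds, field names, and group labels are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Athlete:
    name: str
    cmj_drop_pct: float  # countermovement-jump drop vs. the athlete's rolling baseline
    wellness: int        # morning wellness score, 1-10

def route(athlete: Athlete) -> str:
    """Rules the coach defines in advance, evaluated automatically when data lands."""
    if athlete.wellness <= 4:
        return "recovery group"    # wellness drop routes the athlete to recovery
    if athlete.cmj_drop_pct >= 10.0:
        return "regression block"  # below-threshold test auto-assigns the regression
    return "planned session"       # nothing fired, so the planned work stands

print(route(Athlete("A. Carter", cmj_drop_pct=12.0, wellness=7)))  # prints: regression block
```

The point of encoding the rules this way is that they run the same on Tuesday morning whether or not the coach who wrote them is in the building.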

That is the right number of platform capabilities for the core S&C stack. Three. Anything outside that list is weight in the backpack, unless it is solving a discipline-specific problem that genuinely cannot be folded into the core.

The math on the backpack

If your stack has more than a handful of active platforms doing distinct jobs, the question to ask is not what to add but what to cut. Run the math on what each tool actually returns, and be honest about the difference between what it collects and what it changes about Tuesday's session. The platforms that fail the Tuesday test are the ones whose value proposition is more data without a clear answer to what the data does once it lands.

Cut what fails the test. Keep what fires decisions.

The architecture of fewer, sharper

FYTT was built around the simplification thesis, and the three capabilities above are the foundation rather than the marketing. The Plans engine holds full annual periodization in a single timeline, with mixed participants and program-to-calendar propagation that updates automatically as the plan changes. The Metrics Hub lets you define any metric you need, with custom formulas, override values for return-to-play, and percentile tracking against your own institution. Automations encode decision-tree logic that fires automatically on metric updates, schedules, or team membership changes, so the rules a coach used to keep in their head live in the workflow instead.

If you are running a performance department where the data is rich and the decisions are slow, the diagnosis is not your work ethic. It is your tooling. The backpack does not have to be this heavy.


Upgrade Your Strength and Conditioning System

Join 50+ performance organizations using FYTT to automate programming, individualize training, and apply sport science at scale.

No credit card required. Cancel anytime.
