
You Don't Need Better Attribution

Most early-stage SaaS companies don't have an attribution problem. They have an expectations problem: the belief that better attribution data will make investment decisions make themselves.

Maarten Van den Bossche · 8 min read

Every few months, a SaaS founder asks me the same question: "Should we invest in Dreamdata? HockeyStack? Which attribution tool should we use?"

My honest answer: probably not yet.

These tools are technically impressive. They can track multi-touch journeys, assign weighted credit across channels, and build dashboards that show exactly how 47 touchpoints contributed to a closed deal.

The problem isn't the data. It's the assumption that once you have the data, the investment decisions will become obvious. They won't. No dashboard, no matter how detailed, will look at the numbers and tell you: "Put more budget into LinkedIn and pull back on Google Ads." That decision is still yours to make. And better attribution data doesn't make it easier. It often makes it harder, because now you have seventeen conflicting signals instead of three.

The attribution trap

Here's what I've seen play out at multiple companies:

Week 1: The team is excited. "Finally, real attribution!"

Week 4: The dashboard is too complex. Leadership asks for a simpler view.

Week 8: The team rebuilds it simpler. Now it's too simple. "We're losing nuance."

Week 12: Back to complex. No one looks at it.

Months of resources. Zero better decisions.

What most SaaS companies try to build is one dashboard that is simultaneously simple enough for a Monday morning standup and nuanced enough to capture every touchpoint in a buyer's journey.

That dashboard doesn't exist. And the more time you spend building it, the less time you spend doing marketing that actually works.

What a deal actually looks like in 2026

The reason perfect attribution is impossible isn't technology. It's reality.

A deal today doesn't follow a neat funnel. It looks more like this:

1. Someone sees your LinkedIn post
2. They ask ChatGPT about your category
3. A colleague mentions you in a meeting
4. They Google your brand name
5. They click a Google Ad
6. They book a demo

LinkedIn will say it's their lead. Google Ads will say it's their lead. Every tool in that chain will claim credit. None of them are wrong. None of them are fully right.

And that's just the journey you can see. What about the podcast they listened to where someone mentioned a competitor and they thought "I should also look at alternatives"? What about the Slack message from a friend in another company? Those touchpoints are invisible to every attribution tool ever built.

The volume problem

There's another issue that rarely gets discussed. Most early-stage SaaS companies (€500K to €5M ARR) simply don't have enough deal volume for multi-touch attribution to be statistically meaningful.

With 15 to 30 deals per quarter, every attribution model is just a handful of data points wearing a trenchcoat pretending to be an insight. You can build the most sophisticated model in the world, but when your sample size is 20 closed-won deals, the output is noise dressed up as signal.
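To make the sample-size point concrete, here's a rough sketch using a Wilson score interval for one channel's share of closed-won deals. The specific numbers (20 deals, 6 from LinkedIn) are hypothetical:

```python
from math import sqrt

def wilson_interval(wins: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a proportion (z=1.96)."""
    p = wins / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical quarter: 20 closed-won deals, 6 attributed to LinkedIn.
low, high = wilson_interval(6, 20)
print(f"LinkedIn share: 30%, plausible range {low:.0%}-{high:.0%}")
# The "true" share could plausibly sit anywhere from roughly 15% to 52%:
# far too wide a range to justify reallocating budget on this number alone.
```

At this deal volume, even a dramatic-looking "LinkedIn drives 30% of pipeline" stat is compatible with LinkedIn driving half your deals or barely any. That's the noise-dressed-as-signal problem in one number.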

What to do instead

I'm not saying attribution is useless. I build attribution systems for clients all the time. But I think about it in layers, and I think most companies should stop at "good enough" rather than chasing "perfect."

Layer 1: Collect the raw data

Get everything into HubSpot. Don't analyze it yet, just capture it.

  • LinkedIn engagement through Fibbler. Organic and paid engagement per company, overall and per campaign. This gives you visibility into which target accounts are engaging with your content before they ever fill out a form.
  • Website visitor deanonymization. Know which companies are visiting your website, even without a form fill. This is especially valuable for understanding whether your demand generation is actually driving awareness among your target accounts.
  • Proper UTM tagging. Campaign, source, and medium on every link you put out there. This is table stakes, but you'd be surprised how many companies still don't do it consistently.
  • Self-reported attribution. "How did you hear about us?" is the most underrated field in your CRM. Add it to every demo request form, every signup flow, every content download. More on this below.

And more. Event conversions, ad platform data, whatever other signals are relevant to your channels. The point is to capture broadly so you have data to work with later.
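On the UTM point: consistency matters more than sophistication. "LinkedIn" and "linkedin" show up as two different sources in most reporting tools. A minimal sketch of a tagging helper that enforces one convention (the function name and the slug format are my assumptions, not a standard):

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def tag_link(url: str, source: str, medium: str, campaign: str) -> str:
    """Append utm_source/utm_medium/utm_campaign to a URL.

    Preserves any existing query parameters. Lowercasing keeps
    reporting consistent ("LinkedIn" vs "linkedin" would otherwise
    split into two sources in your CRM).
    """
    parts = urlparse(url)
    params = dict(parse_qsl(parts.query))
    params.update({
        "utm_source": source.lower(),
        "utm_medium": medium.lower(),
        "utm_campaign": campaign.lower().replace(" ", "-"),
    })
    return urlunparse(parts._replace(query=urlencode(params)))

print(tag_link("https://example.com/demo", "linkedin", "paid", "Q3 ABM Push"))
# https://example.com/demo?utm_source=linkedin&utm_medium=paid&utm_campaign=q3-abm-push
```

Whether you use a script, a spreadsheet, or a link builder doesn't matter. What matters is that every link that leaves the building goes through the same convention.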

Layer 2: Build one simple attribution field

For every Demo Request, Signup, or Content Download, build one simple field that captures the main attribution source. This field should be informed by two inputs:

Technical attribution tracks what the system saw. First touch, last touch, UTM source, landing page. It might tell you: "Google Ads, Branded Search."

Self-reported attribution captures what the human remembered. When you ask "How did you hear about us?", they might say: "A colleague recommended you."

These two will often tell very different stories. And that's the point.

Technical attribution shows you how someone arrived at your website. But it doesn't always show you why. Search is consistently over-reported in technical attribution because it's often just the navigation method after another touchpoint. Someone heard about you on LinkedIn, then Googled your name. Google gets the credit. LinkedIn is invisible.

Self-reported attribution isn't statistically rigorous. People don't always remember correctly. But it captures something no tool can: what was memorable enough for a real person to bring up. It gives your prospect a voice in how they discovered you.

Neither is perfect on its own. Combine them and you'll know a lot.
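One way to derive that single main-attribution field, sketched as a rule of thumb rather than a prescription (the vague-answer list and the override logic are assumptions you'd tune to your own funnel):

```python
# Self-reported answers too vague to override technical attribution.
# Extend this list based on what actually shows up in your CRM.
VAGUE = {"", "google", "search", "internet", "online", "don't remember"}

def main_source(technical: str, self_reported: str) -> str:
    """Pick one main attribution source from the two inputs.

    Rule of thumb (an assumption, not a standard): a specific
    self-reported answer wins, because branded search and direct
    traffic are usually navigation, not discovery.
    """
    answer = self_reported.strip().lower()
    if answer and answer not in VAGUE:
        return self_reported.strip()
    return technical

print(main_source("Google Ads / Branded Search", "A colleague recommended you"))
# -> A colleague recommended you
print(main_source("LinkedIn Ads", "google"))
# -> LinkedIn Ads
```

This deliberately throws away nuance. That's the feature: one field, one answer per deal, that a human can read in a pipeline report without opening a dashboard.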

Layer 3: Ask better questions

This is where most companies should spend the majority of their energy. Instead of asking "which channel generated this deal?", start asking:

  • Are our unit economics healthy? Are CAC, SAC, and MAC trending in the right direction? If your cost of acquisition is going down while pipeline is growing, your marketing mix is working as a system, even if you can't attribute individual deals to individual channels.
  • Are we happy with our channel mix and what we're putting out there? This is a qualitative question on purpose. Look at your own LinkedIn ads, your landing pages, your email sequences. Is this actually good? Would you click on it yourself? Would you read it? If the honest answer is "not really," no attribution model will save you.
  • Do prospects mention our touchpoints on calls? When they get on a call with your AE, do they say "I've seen you guys everywhere" or "I read that article you published"? When prospects voluntarily reference your marketing on sales calls, that's attribution that no dashboard can capture. And it's the most reliable signal you have.
  • Would I notice if this channel disappeared tomorrow? If you turned off LinkedIn Ads tomorrow, would your pipeline look different in 90 days? If you're not confident the answer is yes, that tells you something about that channel's real contribution.

So how do you actually decide where to invest?

This is the real question hiding behind every attribution request. When a CEO asks "How much pipeline is LinkedIn generating?", what they're really asking is: "Should we invest more in LinkedIn, or put that money somewhere else?"

Attribution data alone won't answer that. But there is a way to think about it.

List every realistic next marketing investment. Not your dream list. Your actual options for the next quarter. Increase budget on Google Ads campaign X. Write more SEO content. Build a webinar series. Double down on LinkedIn thought leadership. Launch an outbound motion. Be specific about what "investing more" actually means.

Stack the signals for each one. This is where your attribution data from Layers 1 and 2 becomes useful. Not as the answer, but as one input among several. For each potential investment, look at the data you have (what your attribution fields are showing, what the self-reported data says, what the engagement trends look like), the qualitative signals (are prospects mentioning this channel on calls, is the content resonating, does the team believe in this direction), and your gut. You've been doing this long enough. What feels like it's building momentum, even if you can't fully prove it yet?

Rank by conviction, not by a single metric. No formula will spit out the right answer here. Rank your options by how confident you are, using all three signal types. The data, the qualitative signals, and the instinct together.

Fund from the top until you run out of budget. Start with your highest-conviction bet and work your way down. This sounds obvious, but most companies do the opposite: they spread budget evenly across everything to "not miss anything," which means nothing gets enough investment to actually work.

Protect 10 to 15% for experiments. These are your 0-to-1 bets. Channels you haven't tried, formats you're curious about, ideas that don't have data behind them yet. This is where you discover what's next without putting your core pipeline at risk.

Revisit every quarter. Be honest with yourself: one thing should go up, one thing should come down. The companies that struggle most aren't the ones that pick the wrong channel. They're the ones that switch direction every six weeks, never giving anything enough time to compound.

This is messy. It requires judgment. It will never feel as clean as a dashboard with percentages. But it's how real marketing investment decisions get made, and it's a far better use of your time than chasing the perfect attribution view.

The bottom line

An attribution dashboard, no matter how sophisticated, won't tell you the most important things: where you're not showing up, which conversations you're not part of, which audiences don't know you exist yet. And it won't make your investment decisions for you.

What it can do, combined with self-reported data, qualitative signals, and honest judgment, is give you enough to make a confident bet. Not a perfect one. A confident one.

If something in your marketing is really working, you won't need a 3-month data analysis project to see it.

It'll stare you in the face, breathing heavily.

Stop perfecting the measurement. Make the call. Ship the next campaign.

Need help building your marketing ops stack?

I help early-stage SaaS companies set up attribution systems that actually get used. Not perfect dashboards that gather dust.

Let's talk