Marketing Has An Honesty Problem

If you've ever worked in a large company, you've probably been amazed at the lack of cohesive reporting that spans marketing teams.

Each part of the funnel carries its own processes, trackers, KPIs, and technical baggage.

Naturally, this leads to a lot of questioning about data validity. That is, not just asking "Are we aligned in what we are measuring?" but "Are we actually measuring what we think we are?"

Most folks who have worked in large organizations will agree it's an open secret that books are often cooked. Favorable definitions, questionable framing of charts, industry-wide benchmarks that are impossible to verify, and so on.

This doesn't necessarily indicate nefarious intent (although empire builders are prevalent in these companies) but is often the result of teams doing the best they can with the resources they have. The error lies in the failure to admit uncertainty, the thing statisticians quantify with a confidence interval.

The first step in addressing this is tooling, and one of the worst cases is often in social. It's a unique Venn diagram of platforms that measure everything differently, reporting tools with disparate levels of API access, endless content types, and a general lack of internal understanding of the specialization.

Tools try to compensate for this with custom metric definitions and elaborate tagging schemas. However, as any social manager will tell you, too many hours of their lives have been lost to retagging content as products or campaigns change, or fighting convoluted filters and narrow widgets just to maybe get the right selection.

You can imagine how these issues are amplified by orders of magnitude when measuring broader marketing metrics like earned mentions or ecosystem participation. Think keyword-based share of voice, or a Frankenstein aggregation of third-party forum participation, downloads on platforms you don't own, utilization of partner tools, and so on.

The consequences of this go beyond accuracy and extend to the insights drawn from the data. Many teams simply stop at reporting the numbers after they've spent considerable time and effort collecting them, when the whole point of reporting should be to inform action.

AI is an obvious candidate for solving this problem; however, until platforms provide robust access for agents to efficiently and securely gather and analyze data, it remains a fragmented and frustrating option for the non-technical among us.

Bringing your own data is a slightly more manual but promising idea. Rather than trying to connect an agent to disparate platforms or relying on the AI features built into reporting tools, feeding all of your data into one place you own combines centralization with frontier intelligence.

The beauty of this is it can be so simple it hurts: a spreadsheet with all of your content and its relevant info, paired with your favorite agentic tool. This type of setup can also eliminate pesky tagging structures by querying the content itself rather than taxonomy trees. No thought needs to go into data organization; every piece of content innately contains every "tag" it should hold.
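As a rough illustration of the "tags live in the content" idea, here is a minimal sketch. The column names, rows, and the keyword query are invented for the example, not tied to any particular platform export; the point is only that a campaign or product grouping can be a query over the content itself rather than a maintained taxonomy.

```python
import pandas as pd

# Hypothetical flat export: one row per piece of content, no tag columns at all.
# Column names are illustrative, not from any specific platform.
content = pd.DataFrame([
    {"title": "Launch day recap", "body": "Our new API pricing tiers explained in full...", "channel": "blog", "engagements": 412},
    {"title": "Pricing thread", "body": "A quick thread on the new pricing tiers", "channel": "social", "engagements": 187},
    {"title": "Community AMA", "body": "Questions from the spring community call", "channel": "community", "engagements": 96},
])

# "Tagging" happens at query time: a campaign or theme is just a text match
# against the content itself, so nothing ever needs to be retagged later.
pricing_campaign = content[
    content["title"].str.contains("pricing", case=False)
    | content["body"].str.contains("pricing", case=False)
]

print(pricing_campaign[["title", "channel", "engagements"]])
```

An agentic tool sitting on top of the same table can run exactly this kind of query in natural language, which is what makes the setup hold up as campaigns and products change.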

That said, tools, no matter how intelligent, are only as good as your use of them. More important to solving this issue is visibility, specifically in connecting KPIs to see where the ship is headed, rather than what task each crew member is performing.

For this goal, I would propose weighted KPIs into which specialized team metrics roll up. Take the broad bucket of "engagements" as an example: the social team tracks post interactions, the community team tracks event attendees, the email folks track opens, and so on. From the top down, how do you equate these things to understand their relative performance and meaning? Obviously, attending an event is a deeper interaction than liking a social post, but that's a qualitative understanding. Most teams operate on this intuition without ever formalizing it, leading to missed insights.

By broadly bucketing these as "engagements", with each type carrying a different mathematical weight, we can get a real understanding of where growth is happening. This can be carried all the way up or down the funnel, with different weight values applied at each stage to produce a holistic view.
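To make the roll-up concrete, here is a minimal sketch. The metric names and weight values are assumptions a team would need to negotiate for themselves, not a standard; the mechanics are just a weighted sum per reporting period.

```python
# Illustrative weights only: these are made-up values a team would have to
# agree on, not a prescription. Each team's native metric rolls up into one
# "weighted engagements" number.
ENGAGEMENT_WEIGHTS = {
    "social_interaction": 1,    # likes, replies, shares
    "email_open": 0.5,
    "event_attendee": 25,       # far deeper engagement, so a far heavier weight
}

def weighted_engagements(counts: dict[str, float]) -> float:
    """Collapse per-team engagement counts into a single weighted total."""
    return sum(ENGAGEMENT_WEIGHTS[kind] * n for kind, n in counts.items())

# Example periods: raw email opens dropped, but the weighted total still grew
# because event attendance rose.
march = {"social_interaction": 12000, "email_open": 40000, "event_attendee": 150}
april = {"social_interaction": 15000, "email_open": 38000, "event_attendee": 240}

print(weighted_engagements(march))  # 12000*1 + 40000*0.5 + 150*25 = 35750.0
print(weighted_engagements(april))  # 15000*1 + 38000*0.5 + 240*25 = 40000.0
```

The weights are where the honest arguments happen, which is the point: writing them down forces the qualitative intuition into something everyone can see, challenge, and revise.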

Too often, you get a report in your inbox where the only mention of how the data was derived is a small asterisk at the bottom of a table. Data methods should be collaborative endeavors, with methodologies openly documented and constantly open to criticism and refinement from people outside each specialization.

The teams who will get reporting right are not the ones with the latest AI tools or specialized KPIs, but the ones willing to tackle perverse incentive structures and refactor the underlying norm of projecting confidence in the presence of uncertainty. To that end, advocating for accuracy and being the first to admit your reports are not perfect is a great place to start.