Rand Fishkin @randfish, Founder @SparkToro, author of Lost & Founder. Formerly @Moz. Jul. 25, 2019

Like a lot of marketers, I really appreciate & get value from Merkle's quarterly reports (e.g. their latest  https://www.merkleinc.com/thought-leadership/digital-marketing-report ).

However, I think it's irresponsible to cite this data without explaining methodology and built-in biases. /1

Whenever, for example, I make charts like this, I include the source and some methodology, then link to more info & detail (e.g.  https://sparktoro.com/blog/how-much-of-googles-search-traffic-is-left-for-anyone-but-themselves/ ). Merkle themselves do this well, but many who cite & write about this data don't. That's unwise & misleading; here's why... /2

SEJournal is far from the only culprit, but their headline yesterday is a good example of this problem:  https://www.searchenginejournal.com/google-is-delivering-less-organic-search-traffic-than-last-year/318109/  It reads: "Google is Delivering Less Organic Search Traffic Than Last Year." That's probably not true.

What's actually happening?
- Google's mobile search app (among others) is sending a ton of "dark" traffic (see the sketch after this list)
- Desktop traffic is flat (w/ slight decline in organic click rate)
- Mobile browser search is also plateauing
- Merkle's clients are losing a little more than the avg site
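
To make the "dark" traffic point concrete, here's a minimal sketch, assuming the common analytics convention of bucketing visits by HTTP referrer. This is not Merkle's or any vendor's actual logic; the domain list and function name are illustrative only. The point: when an app or in-app browser strips the referrer, a genuinely organic visit gets counted as "direct."

# Illustrative only -- not any analytics vendor's real classification code.
SEARCH_ENGINE_DOMAINS = ("google.", "bing.", "duckduckgo.", "yahoo.")

def classify_visit(referrer):
    """Return the channel bucket a typical analytics tool would assign."""
    if not referrer:
        # Google's mobile search app (and many in-app browsers) often strip
        # the referrer, so genuinely organic visits fall into "direct".
        return "direct"
    if any(domain in referrer for domain in SEARCH_ENGINE_DOMAINS):
        return "organic search"
    return "referral"

print(classify_visit(None))                       # -> "direct" (dark traffic)
print(classify_visit("https://www.google.com/"))  # -> "organic search"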

Methodology is crucial, b/c:
- Merkle shows data from their clients' websites (granted, they have a lot of big sites)
- Jumpshot shows data from their panel's devices (granted, that's 100M+)
- StatCounter shows data from sites they're installed on
Each has known (& unknown) biases

