Jim Spanfeller currently runs the Spanfeller Media Group, holds the title of chairman emeritus of the Interactive Advertising Bureau and was the CEO of Forbes.com, so he knows a few things about Internet publishing and advertising. Speaking with VentureBeat, he outlined the major problems with the current system of Internet marketing: complexity, lack of transparency and ad fraud, the last of which is largely made possible by the first two. Along with the systemic problems created by depending on algorithms and automated systems, reliance on third-party data isn’t doing advertisers any favors.
Everyone relies on computers, and they shouldn’t
I don’t think it’s a stretch to say that computers have revolutionized the way that advertising is handled. I mean, clearly, you need computers to do the whole Internet advertising thing, but algorithms and automated systems have allowed advertisers to create ad campaigns that are responsible for millions of ad impressions on hundreds of thousands of sites.
And these ads don’t just show up randomly; they’re (supposedly) assigned not only to sites that are likely to have a receptive audience but, in some cases, to specific individuals. Given the sheer number of impressions, there’s no way a human being could even begin to handle this task, but a computer does it with ease.
However, the problem is that when you leave things to computers, you’re assuming the computer is doing its job properly. Very little verification is done to ensure that ads are showing up in the right places for the right people, and the checking that is done has produced some alarming results.
We recently wrote about how advertisers are expected to spend a billion dollars next year on mobile ads that will never be seen due to mobile malware. However, that’s just mobile; back in 2013, The Atlantic shared a report indicating that 61.5% of Internet traffic came from bots. Another recent report from the Association of National Advertisers and White Ops, summarized by Adweek, indicates that $6.3 billion will be wasted due to bot activity.
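To put figures like these in perspective, the waste is just the campaign spend multiplied by the share of traffic that comes from bots. Here is a back-of-the-envelope sketch in Python; the $1,000,000 campaign budget is purely hypothetical, and the 61.5% rate is The Atlantic's overall-traffic figure, not a per-campaign measurement:

```python
# Back-of-the-envelope estimate of ad spend wasted on bot traffic.
# The campaign budget below is hypothetical; only the structure of
# the calculation is the point.

def wasted_spend(total_spend: float, bot_rate: float) -> float:
    """Portion of spend that pays for impressions served to bots."""
    return total_spend * bot_rate

# A hypothetical $1,000,000 campaign at the 61.5% bot-traffic figure:
print(wasted_spend(1_000_000, 0.615))  # 615000.0
```

The same arithmetic explains why even the lower bot rates on premium exchanges still translate into serious money once campaign budgets get large.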
Now, this isn’t technically due to computer algorithms running the show; in fact, it’s more likely due to publishers and agencies that go out of their way not to look for indications of bots. However, automated systems make it far harder for an advertiser to discover that many of the impressions, and even clicks, their ads are getting come from bots, not people.
Premium helps, but it doesn’t solve the problem
Spanfeller notes that random sites, like brushmyteeth.com, can easily end up in an exchange, and with little control over where ads are displayed with programmatic advertising, ads may be served just about anywhere. He notes that using premium ad exchanges can lower the amount of fraud and the likelihood that an ad will show up on an iffy website, but it’s not a panacea.
The aforementioned Association of National Advertisers and White Ops report indicates that 10% of traffic on premium campaigns is driven by bots. A better number, but it’s still 10% of an ad campaign being wasted.
Given the volume of ads and the number of websites they can be displayed on, there may not be a realistic way to completely eliminate fraudulent clicks and views from bots. Bots are specifically designed to imitate the movements of regular Internet users – some have thousands of possible routines – meaning that they are very, very difficult to identify, even when you have an actual person observing.
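To see why detection is so hard, consider the kind of simple heuristic a defender might start with: flagging sessions whose click timing is suspiciously regular. The sketch below is purely illustrative – the threshold is invented, and, as the paragraph above notes, real bots randomize their behavior precisely to slip past checks like this one:

```python
# Toy heuristic: flag sessions whose click intervals are suspiciously
# regular. Bots with randomized routines defeat this easily, which is
# exactly the point. The 0.05-second threshold is invented.
import statistics

def looks_like_bot(click_timestamps: list[float]) -> bool:
    if len(click_timestamps) < 3:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(click_timestamps, click_timestamps[1:])]
    # Human click intervals jitter; near-zero variance suggests automation.
    return statistics.pstdev(gaps) < 0.05

print(looks_like_bot([0.0, 1.0, 2.0, 3.0]))  # True: perfectly even gaps
print(looks_like_bot([0.0, 0.8, 3.1, 3.9]))  # False: human-like jitter
```

A bot that draws its delays from a human-like distribution passes this check untouched, which is why catching sophisticated bots takes far more than timing statistics.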
Ensuring that websites are legitimate and of a minimum quality before being added to exchanges would be a good way to help reduce fraud, but it’s still not a silver bullet. It’s not difficult to create a well-designed site and use it for nefarious purposes. The fact that many ad exchanges are happy to collect the money from fraudulent impressions and clicks doesn’t bode well for improved detection methods.
Third-party data or a Ouija board?
First-party data is information that a business has collected itself about a user, and it is very useful. Third-party data is collected from, well, a third party, and according to Spanfeller, it is about as helpful as a chocolate skillet.
Although Facebook has made major strides with Atlas, which follows people across browsers and devices based on Facebook logins, most tracking is based on hope and guesswork. In many cases IP addresses are used for tracking, but several devices on a network can share a single IP address.
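A small sketch makes the problem concrete: every device behind a home router presents the same public IP address, so an IP-keyed profile blends whole households into one "user." All of the data below is invented (the address comes from the reserved documentation range):

```python
# Illustration of why IP-based profiles conflate users: every device
# behind a NAT router shares one public IP. All data here is made up;
# 203.0.113.x is a reserved documentation address range.
from collections import defaultdict

sessions = [
    ("203.0.113.7", "laptop", "running shoes"),
    ("203.0.113.7", "tablet", "cartoons"),
    ("203.0.113.7", "phone",  "insulin"),
]

profile = defaultdict(list)
for ip, device, interest in sessions:
    profile[ip].append(interest)

# One "user" according to the IP, three different people in reality:
print(profile["203.0.113.7"])  # ['running shoes', 'cartoons', 'insulin']
```

An ad targeter working from this profile has no way to tell which interests belong to which person, which is the guesswork Spanfeller is describing.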
The major issue, to me, with trying to collect information about a person via a computer or mobile device is that many people share devices. Additionally, people may run searches on behalf of a visitor. If someone looks up information on insulin for a visiting family member, they will probably see advertising for managing diabetes, even though they don’t actually suffer from the condition.
With Facebook, you’re more likely to get information about one person, but the information may form an incomplete picture. If I do have a medical condition, I probably won’t talk about it on Facebook, but I will probably do a Google search at some point. Google will also know when I search for a quick, easy recipe using chicken, but unless I post my meal on Facebook, social media won’t.
This is likely why Spanfeller states that scientists he’s spoken with have basically laughed at the idea that third-party data is accurate. While it isn’t always wrong – I frequently see ads that are well targeted to my interests – I’ve also seen several ads in Spanish, which I don’t ever speak unless I’m mispronouncing my order at Taco Bell.
At the end of the day, there’s a lot going wrong with online advertising, but much of it can be fixed by recognizing the issues and addressing them. Bots are not likely to be eliminated any time soon, but more honesty in managing ad networks can probably reduce the damage they do. Likewise, recognizing the limitations of third-party data can probably reduce wasted ad dollars.