Ahead of his participation in a panel discussion about data on the buy side at ATS Paris, ExchangeWire spoke with Shane Shevlin, IPONWEB’s EMEA Commercial Director, about the rise and rise of header bidding; where publishers need to be vigilant in order to be successful; and the striking parallels between publisher header bidding and buy-side container tag technology.
In early 2006 I remember sending a panicked email to the DFA product manager at DoubleClick. I was managing the implementation of a Floodlight retagging exercise for a large global travel advertiser and something was out of kilter.
The killer idea behind Floodlight was that an advertiser/agency could manage how website conversions and sales from a specific digital channel, or ‘network’ traffic acquisition partner, were attributed. The technology inside was ‘front-end de-duplication’: a technique that leveraged a user’s unique cookie ID and the timestamps recorded when they clicked or viewed ads before converting. Using this tech, together with DFA tracking tags, you could bring external channel clicks and views into the DFA reporting engine, showing which channels/partners drove sales and when they did so. Not too dissimilar to what many header bidding kits allow you to do with buyer interest signals today – except in reverse.
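For the uninitiated, the de-duplication logic worked roughly like the sketch below (an illustrative reconstruction, not DoubleClick’s actual implementation): each conversion is credited to a single channel based on the user’s cookie ID and the timestamps of their prior clicks and views.

```typescript
// Illustrative only: not DoubleClick's actual logic. A conversion is credited
// to one channel by replaying the user's prior ad interactions, with clicks
// taking precedence over views and the most recent qualifying event winning.

type EventType = "click" | "view";

interface AdEvent {
  cookieId: string;  // the user's unique cookie ID
  channel: string;   // e.g. an affiliate network or search partner
  type: EventType;
  timestamp: number; // epoch millis recorded when the ad was clicked/viewed
}

function attributeConversion(
  cookieId: string,
  conversionTime: number,
  events: AdEvent[],
): string | undefined {
  // Keep only this user's events that happened before the conversion.
  const candidates = events.filter(
    (e) => e.cookieId === cookieId && e.timestamp < conversionTime,
  );
  if (candidates.length === 0) return undefined; // unattributed conversion

  // Clicks beat views; within the same event type, the latest timestamp wins.
  candidates.sort((a, b) => {
    if (a.type !== b.type) return a.type === "click" ? -1 : 1;
    return b.timestamp - a.timestamp;
  });

  // Exactly one channel gets the credit: the "de-duplicated" attribution.
  return candidates[0].channel;
}
```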
The feature helped us win big on the direct advertiser new business front, and pan-European advertisers, in particular, loved the idea of more efficient tracking via a single source of truth. Add real-time, flattened attribution management across 10+ territories and languages inside one platform and you had a very sticky product.
The problem was that Floodlight version 1.0 contained a big, fat bug and the real-time user de-duplication wasn’t firing correctly. The cause was obscure, and it took our engineers longer than it should have to pinpoint. Being on the frontline was painful, with calls every half hour from different digital channel managers. The numbers didn’t make sense: client feedback showed a fairly predictable split of credit due per channel for new client acquisition, which wasn’t reflected in Floodlight’s figures. The engineers finally found the source of the bug – it was linked to the fact that DFA treated previously unrecognised users differently from users for which it had already dropped a cookie. The knock-on effect was chaos for our travel client: our beta partner, which was seriously ramping up digital marketing budget across 10+ different channel partners.
The episode brought home how much power Floodlight wielded: a container tag technology that could play gatekeeper on who got paid what, based on what it said was the reconciled truth, after budgets had already been spent. If your channel partner’s numbers didn’t tally with DFA’s, payment was often withheld until the discrepancy had been understood. But if the tag was implemented incorrectly, or a bug found its way into the logic executed by the JS on the page, real-world money was lost by the deserving and earned unfairly by the less deserving.
The bug fix was rolled out at lightning speed and DFA grew its market share massively. Advertisers loved it because it helped hold channel partners to account against the gold standard in tracking technology. With the help of the agency operating companies being prodded along by their biggest clients, we cleaned up on the new business front…
The parallels between publisher header bidding and buy-side container tag tech are striking to me. In a somewhat ironic twist, header bidding is essentially a mirror image of how Floodlight works: a mechanism being exploited in an attempt to claw back against Google’s continued and relentless landgrab – this time on the sell side. The underlying tech is basically the same, except in reverse: outputs from header bidding don’t affect how payment is attributed to traffic acquisition partners, but rather determine where revenue comes from, and how much of it flows, from the demand partners buying ad space on your placements. The concept of header bidding is an open one – the publisher can choose to implement it as it pleases, using JavaScript that orders real-time buyer interest at priorities decided by the publisher itself (usually inside the primary ad server). Bid value transparency is frequently cited as a key ‘win’ when using header bidding.
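Stripped of vendor specifics, the mechanism looks something like this (a conceptual sketch; the DemandPartner adapters and the ad server call are hypothetical stand-ins, not any particular wrapper’s API):

```typescript
// A conceptual sketch of header bidding, not any specific wrapper's API.
// The DemandPartner adapters and the ad server call are hypothetical.

interface Bid {
  partner: string;
  cpm: number;      // bid price expressed as a CPM
  adMarkup: string; // creative to render if this bid ultimately wins
}

interface DemandPartner {
  name: string;
  requestBid(slotId: string): Promise<Bid>;
}

async function runHeaderAuction(
  slotId: string,
  partners: DemandPartner[],
): Promise<Bid | undefined> {
  // Fire all bid requests in parallel from the page, before the ad server call.
  const bids = await Promise.all(partners.map((p) => p.requestBid(slotId)));

  // The ordering rule is the publisher's to decide; here, highest CPM wins.
  return bids.sort((a, b) => b.cpm - a.cpm)[0];
}

// The winning bid is then typically passed to the primary ad server as
// key-values, where a line item at a publisher-chosen priority lets it
// compete with direct-sold and other demand, e.g.:
//   adServer.setTargeting(slotId, { hb_bidder: bid.partner, hb_pb: bid.cpm });
```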
There are already many interested parties vying for dominance in asserting their own particular flavour of this open and free concept. After all, controlling preemptive information about publishers’ inventory and user data is what helped high-growth businesses like Criteo assert their dominance in the past; and others now see a chance to reciprocate.
I see three areas where publishers must be vigilant:
1) Reduced operational efficiency and risk to editorial SLAs with extra page load
2) Unrealised revenue opportunities from loss of data control
3) Adapting quickly enough to the transience of budgets and where they live over time
To many programmatic purists, header bidding is akin to the Emperor’s new clothes – an attempt to recreate the net opportunities that open RTB standards already technically afford. The real issue is one of failed delivery on high publisher expectations of supply-side programmatic tech, and more specifically on control and transparency. Header bidding is gaining popularity as a known quantity: a sweet spot from where you expect your business to come. And because the ‘smarts’ are executed by code deployed in the publisher’s own page source, there is the impression of additional control.
The challenge is deciding on a sensible number of preferred demand partners, and which demand is most relevant and lucrative to you. An equally effective strategy for publishers to regain control and future-proof themselves against demand-side market moves is to hold SSP partners to account, or to control outright their own unified ad server + auction environment.
Depending on how well header bidding has been implemented, publishers may see a real impact on page load times; the scramble among third parties to bring forward ‘open’ header bidding APIs means the design and quality of solutions vary greatly. The decisioning can be taxing on the user’s browser due to the volume of data being transmitted – a problem exacerbated by the sheer number of potentially high-spending buyers out there. Whether decisioning is synchronous or asynchronous is also a factor.
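One common mitigation is to collect bids asynchronously under a hard timeout, so that slow or failing partners cannot hold up the ad server call or the page. A minimal sketch, reusing the hypothetical types from the earlier example:

```typescript
// Bounding the latency cost: collect bids asynchronously under a hard timeout
// so that slow (or failing) partners cannot hold up the ad server call or the
// page. Bid and DemandPartner mirror the hypothetical types from the earlier
// sketch, redeclared here so the example stands alone.

interface Bid {
  partner: string;
  cpm: number;
}

interface DemandPartner {
  name: string;
  requestBid(slotId: string): Promise<Bid>;
}

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T | undefined> {
  const timeout = new Promise<undefined>((resolve) =>
    setTimeout(() => resolve(undefined), ms),
  );
  return Promise.race([p, timeout]);
}

async function runBoundedAuction(
  slotId: string,
  partners: DemandPartner[],
  timeoutMs = 500, // the publisher's latency budget for the header auction
): Promise<Bid | undefined> {
  const results = await Promise.all(
    partners.map((p) =>
      // Errors are treated the same as timeouts: that partner simply misses out.
      withTimeout(p.requestBid(slotId).catch(() => undefined), timeoutMs),
    ),
  );
  const bids = results.filter((b): b is Bid => b !== undefined);
  return bids.sort((a, b) => b.cpm - a.cpm)[0];
}
```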
There is no denying that, with current market dynamics, header bidding allows publishers to get closer to the deal again and to secure higher spends from preferred buyers – a much-needed shot in the arm in a tough market. But how sustainable will this uptick be, when the increase in demand-side programmatic activations continues unabated? Will rearchitecting how you sell around header bidding be best for the longer term? As open market buying volumes continue to grow, there are potentially hundreds of higher bids for your impression that go unseen when it is bought via header bidding. I’m not saying this is the case outright today; but replicating the same wide-angle view of the market via header bidding is potentially a very tedious ask. And much like buy-side container tag tech, the ‘short-circuit’ decisioning layer potentially curtails the flow of money, this time to your detriment. In a somewhat bizarre practice, some header bidding implementations take signals that come back from preferred buyers and incorporate them into a unified auction – perhaps bringing back visibility on open market demand – but isn’t that the exact same thing as an open RTB auction? And won’t buyers feel cheated that their ‘first look’ has been eroded, and either push their spend back through the ‘normal’ open-RTB channel, or simply spend elsewhere?
The often poor performance of second-price auctions, and the abuse of their mechanics by some performance-bidder technologies, are partly why header bidding exists in the first place. Be that as it may, open auctions will continue to be visited by an increasing number of RTB buyers and, perhaps most notably, more direct marketers who are standing up their own tech. Not everyone can have a seat at the header-bidding table; it’s just not that scalable. And fewer buyers means less density, which means less competition – ultimately driving prices down.
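For reference, the second-price mechanic in question works roughly as sketched below: the highest bid wins but pays just above the runner-up (or the floor, if that is higher), which also shows why thinner competition pulls clearing prices down. Illustrative only, not any exchange’s exact rules.

```typescript
// Reference sketch of second-price clearing: the highest bid wins but pays
// just above the runner-up (or the floor, if that is higher). Illustrative only.

function clearSecondPrice(
  bids: number[],  // CPM bids submitted to the auction
  floor = 0,       // publisher-set reserve price
  increment = 0.01,
): { winningBid: number; clearingPrice: number } | undefined {
  const sorted = [...bids].sort((a, b) => b - a);
  if (sorted.length === 0 || sorted[0] < floor) return undefined;

  const runnerUp = sorted.length > 1 ? sorted[1] : floor;
  return {
    winningBid: sorted[0],
    // The winner never pays more than their own bid.
    clearingPrice: Math.min(sorted[0], Math.max(floor, runnerUp + increment)),
  };
}

// clearSecondPrice([8.0, 3.5, 2.1], 1.0) clears at 3.51, not 8.0: with fewer
// competing bidders the runner-up bid falls, and the clearing price with it.
```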
When exchanges represent their best ‘managed demand’ price, publishers should remember that not all exchanges are created equal. Relying completely on a third party to secure known and unknown demand via their platform simply won’t cut it; publishers should be actively driving this task and exerting a high level of control over it themselves. And, while RTB auction tech vendors have some way to go before regaining trust, remember that ALL the data points and levers needed to regain both trust and control for the publisher are already available inside open RTB supply-side tech. But it should happen less on behalf of, and much more in partnership with, the publisher in future.
Through RTB standards, publishers can offer ‘commodity accountability’ and, with it, buyer safety (assurance that the commodity is legitimate). On the question of data control, many publishers, and their supply-side partners, have missed a trick. Serious buyers crave reliable reach and demographic signals and are frequently prepared to pay handsomely for them. The ability to propagate first-party data signals over the RTB bid stream has been underexploited by publishers to date. Header bidding does not afford the same level of control in such an endeavour – quite the opposite, in fact. With header bidding, the onus is on the buyer to ‘collect’ the signals (user/content/context, etc.), whereas with an RTB-sold impression, the publisher is responsible for ‘pushing’ the signals in the first instance and is thereby afforded a better opportunity to monetise them.
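To make that concrete, here is a simplified sketch of what a publisher-enriched bid request can carry over the RTB bid stream; the field names follow the OpenRTB 2.x object model, while the IDs, values and segment names are purely illustrative.

```typescript
// Simplified sketch of a publisher-enriched OpenRTB 2.x bid request. Field
// names follow the OpenRTB object model; IDs, segments and values are
// purely illustrative.
const bidRequest = {
  id: "req-123",
  imp: [{ id: "1", banner: { w: 300, h: 250 }, bidfloor: 2.5 }],
  site: {
    domain: "example-publisher.com",
    page: "https://example-publisher.com/travel/deals",
    cat: ["IAB20"], // declared content category (Travel)
    content: { keywords: "city breaks,flights,last minute" },
    publisher: { id: "pub-42" },
  },
  user: {
    id: "first-party-user-id",
    // Publisher-declared audience data, pushed to buyers in the bid stream
    // rather than left for them to collect on the page.
    data: [
      {
        id: "pub-dmp",
        name: "example-publisher-segments",
        segment: [
          { id: "frequent-traveller", value: "true" },
          { id: "age-band", value: "25-34" },
        ],
      },
    ],
  },
};

console.log(JSON.stringify(bidRequest, null, 2));
```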
Exerting this level of control over a unified open RTB auction, together with ad server functionality, is still the most effective way to future-proof for broader market interest in your wares, as programmatic spend continues to grow unabated. Publishers should focus on understanding where their market lives today – actively investigating practices like bid suppression and other internal DSP auction dynamics which don’t work in your, or your brand partners’, favour. Understand from brand owners what a fair price for your inventory is, based on real market demand. If hotspots have higher-than-average demand, raise your prices unashamedly and never make a premium impression available on multiple platforms – this practice may increase yield short-term but will certainly dilute your brand and suppress CPMs long-term. If you truly believe your inventory is worth more than you’re getting today, hold your ground and figure out how to get your buyers to the correct point of sale so you regain control of the transaction. Far from being the poor cousin in the ad tech chain, publishers have the potential (with the right technology and ethos) to provide buyers with their dream scenario: full transparency and accountability, and with that, a new era of publisher revenue.
Shevlin will be participating in a panel discussion on ‘How the Buy-Side Can Make Data the Key Component to Success in Programmatic’ at ATS Paris on 13 April. Find more information here.
To see the original article, please visit ExchangeWire.com