Ten things we learnt from this year’s asi Conference

asi’s Research Director, Richard Marks, reflects on the main themes to have emerged from this year’s event and the challenges ahead

For five three-hour sessions spread across the week of 1-5 November, over 300 media professionals from 43 countries gathered virtually to discuss and debate the present and future shape of media and its measurement. Now that the dust has settled and we have had time to reflect, Mike Sainsbury has asked me to outline my main takeaways from the conference.

These are personal views designed to provoke debate, and hopefully you will disagree with a few of them. ‘Hopefully’ because that may prompt you to share your own thoughts about the themes from the event – thoughts which we would be more than happy to share here to keep the debate going amongst our community.

So, with that caveat out of the way, here are my ten takeaways, in no particular order:

1. Electronic radio measurement is hot again, but is it more than just a marriage of convenience?

Those who attended our radio events in the Noughties will recall the heated debates around the introduction of PPMs and wristwatches, as Matthias Steinmann, Jay Guyther and others fought it out onstage with each other and the naysayers. Whilst a number of countries adopted metered measurement and nearly all have happily stayed with it through this decade, after that first flush around 2005-10 the impetus seemed to stall, to the degree that electronic measurement has barely been mentioned at asi over the last few years. Now it is back, with the Netherlands, South Africa and Australia revealing plans to use metered measurement to varying degrees, and the UK having already done so.

What is driving this? Well, the pandemic has exposed the fragility of diary-based data collection, making systems that are less prone to disruption more attractive. Meanwhile, there is continued pressure from the wider industry to (at least be seen to) modernise radio and audio measurement.

It’s important to note that the way in which the meter element is being used varies wildly across these four new markets. The Dutch are just ‘going for it’, with a complete switch to Ipsos’ meters and also a move to TV-style metrics to compete against TV and video in the same analysis systems. In the UK, the BBC Compass Panel is an element of the newly-adapted RAJAR service, but diaries from sweeps are what really drive the system, and the meter panel may not be a long-term element. In South Africa the plan is to introduce an Ipsos meter element next year in a hybrid system, whilst in Australia the GfK meter panel will sit alongside the currency to ‘calibrate and validate’ but not actually be inside it.

So, with the exception of the Netherlands, recall continues to rule the roost in these newly-announced contracts. The emphasis is on hybrid systems, combining recall (diaries or day-after surveys) with electronic – and also potentially streaming census data – in the mix. That leads to my second point…

2. In hybrid systems, who decides where to set the needle?

In the presentations and discussions on our radio day, it was apparent that a lot of statistical work is being done to combine diary/recall data with electronic data. This raises a question that extends beyond radio to all forms of hybrid measurement. When you have two (or more) measures of the same thing, who determines where the needle is set when you combine them? Whether it is radio combining recall and meters, or TV combining streaming census and meter panels, the decision about where to set that needle has profound consequences, and the suspicion lingers that the balance may be determined as much by political pressure as by statistical analysis.

Let’s return to radio as our example. Veterans of the PPM wars at asi over the years will know that diaries tend to produce far higher overall hours than meters – particularly at the breakfast peak – whereas meters often produce higher reach over time. So, if I were a radio group my preferred hybrid solution would take the elements that are highest – the reach from meters and the hours from diaries. Reading between the lines of this year’s presentations, the tendency seems to be to model the meter data to make it look more like diaries rather than vice-versa. I understand the political motivation for that, but from a budgetary perspective it does seem rather like buying a sports car and then only using it to drive to the shops.
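To make the ‘needle’ concrete, here is a minimal, purely illustrative Python sketch of the simplest possible fusion: a single weight that decides how far the published hybrid figure leans towards diaries or meters. The function, the weight and all of the figures are invented for illustration – real hybrid systems use far more sophisticated modelling – but the governance point is the same: someone has to pick that weight.

```python
# Illustrative only: a single fusion weight decides whether the hybrid
# currency leans towards diary recall or meter data. All figures invented.

def hybrid_estimate(diary_value: float, meter_value: float, weight: float) -> float:
    """Blend two measurements of the same quantity.

    weight = 1.0 reproduces the diary figure, weight = 0.0 the meter figure.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be between 0 and 1")
    return weight * diary_value + (1.0 - weight) * meter_value

# Diaries tend to report higher listening hours; meters higher reach.
diary_hours, meter_hours = 14.2, 9.8   # weekly hours per listener (invented)
diary_reach, meter_reach = 0.62, 0.71  # weekly reach proportion (invented)

# A diary-led setting (weight 0.8) keeps the currency close to the old figures,
# for hours and reach alike:
print(round(hybrid_estimate(diary_hours, meter_hours, 0.8), 2))
print(round(hybrid_estimate(diary_reach, meter_reach, 0.8), 3))
```

Note that a single diary-led weight drags down the reach advantage the meters deliver as well as preserving the diary hours, which is precisely the sports-car-to-the-shops trade-off described above.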

This issue of transparency has been touched on many times at asi in the past, usually with the FAANG companies in the crosshairs, but the more the established silo currencies move towards hybrid systems, the more reassurance the industry will need about how the fusion decisions are taken. Put simply, data scientists will now hold more power over the currency than the interviewers or respondents ever did.

3. Is currency measurement producing the right metrics to truly understand the effectiveness of content as opposed to advertising?

The WFA North Star has understandably dominated the measurement debate over the last couple of years with the focus on providing what advertisers want. But what do content owners want? It was clear from our Tuesday session that subscription services and PSBs share the advertisers’ obsession with outcomes, but their outcomes and KPIs are very different. They may be about retaining and attracting subscribers, delivering value to licence-fee payers, maximising the audiences they can offer to advertisers and determining the optimum balance across platforms, between linear and on demand and between ad-funded and subscription models.

Can currency metrics really serve the needs of both content and advertising, or may different approaches be needed? On the face of it, it might seem ludicrous to contend that a system like Parrot Analytics, reporting something as nebulous as global online ‘demand’, could supplant the TV currencies. Certainly, for the advertising industry it is irrelevant, but for those in the business of optimising their content to maximise audiences it could yet prove an important supplement to, if not a replacement for, audience ratings. It is noticeable that, when it comes to VOD reporting, Nielsen has moved away from broadcast-style metrics to volumetric analysis – ‘billions of minutes’ – in an attempt to allow comparisons between a two-hour movie and 236 episodes of Friends.
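The appeal of volumetric metrics is easy to illustrate with arithmetic: total minutes viewed puts a feature film and a long-running series on the same scale, whereas episode-level ratings cannot. A rough sketch, with entirely invented audience figures:

```python
# Invented figures: compare a film and a series on total minutes viewed,
# the volumetric approach behind 'billions of minutes' style reporting.

def total_minutes(viewers: int, minutes_per_view: int, episodes: int = 1) -> int:
    """Volume of viewing in minutes: viewers x duration x episodes."""
    return viewers * minutes_per_view * episodes

# One two-hour movie watched by 3m people (invented):
movie = total_minutes(viewers=3_000_000, minutes_per_view=120)

# 236 episodes of a sitcom averaging 500k viewers per 22-minute episode (invented):
sitcom = total_minutes(viewers=500_000, minutes_per_view=22, episodes=236)

print(movie)   # 360000000
print(sitcom)  # 2596000000
```

On a per-episode ratings view the movie looks six times bigger; on a volumetric view the series dwarfs it, which is exactly why the choice of metric matters so much to content owners.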

There can often be debate about whether currencies can deliver to both planning and trading requirements. To that we can add the debate about how well they serve the needs of both content and advertising. Taking the US as an example, is measurement shifting to establish more of a balance of power between the East Coast ad industry and the West Coast entertainment industry?

4. A free market for data driven by the public themselves as data owners.

I was struck by the simplicity of the approach that Digital-i uses for its SODA service to measure SVOD consumption. As opposed to router meters, audio-matching or recall studies, Digital-i simply gathers the data that subscribers are legally entitled to request from the services they use, turning GDPR legislation to its advantage. It’s an idea so simple it could well catch on. There are no worries about measuring across devices, although obvious issues remain about how viewing is attributed to demographics within the household, given the incidence of shared log-ins (which extend outside the home as well). This sounds like a challenge that data science, or access to other data sources, can overcome.

I was reminded of the presentation that Oliver Pischke of Kantar gave at our 2018 event in Greece, in which he outlined how blockchain could be a means for bringing about a new model for data gathering, in which individuals exercise their right to access their own data records with the services they use and then make that available to the highest bidders via a system of micro-payments. Effectively a decentralised approach, such a model could reduce the barriers to entry in crowd-sourcing large samples.

5. Attention is a hot topic, but don’t expect to see it in a currency any time soon.

It is clear that momentum is gathering behind the use of attention metrics to evaluate the relative value and impact of video advertising and our Wednesday session reflected this. Over the last few conferences, we have seen the work of pioneers like Karen Nelson-Field move from theoretical pilots to real-world applications. Meanwhile, the Attention Council continues to lobby for wider adoption of attention in media planning. There have also been initiatives to suggest common metrics such as Ebiquity’s proposed aCPM.

This may all make sense, except when it comes to the talk of attention as a currency: the notion that attention can somehow replace exposure as a metric. As I discussed with Lumen’s Mike Follett at last year’s virtual event, there are three main reasons why it is unlikely ever to become a joint currency.

Firstly, the media agencies like to be able to differentiate themselves, to show that their approach is better than their rivals’. Commoditising and standardising a key part of media planning removes that point of difference.

Secondly, even if a currency did try to implement it, the currencies are primarily funded and governed by the media owners. Any pilot would reveal who the attention winners and losers would be and the losers would simply block its implementation. Exposure may be bland and simple but with a good methodology and a high-quality sample, it is harder to contest than more qualitative measures.

Finally, attention makes sense for planning, but it could never be used for trading as it effectively de-risks the whole process of advertising. If an advertiser produces a terrible ad, then it will get low attention and so they will pay the media owner less. That is hardly fair.

So, yes, attention has a big role to play, but in my view that role is supplementary, building on top of exposure currencies rather than being incorporated within them. Meanwhile, multiple attention measures will co-exist as clients opt for the system that they feel gives them the best competitive advantage and differentiation.

6. Panels may be experiencing a second coming, but we should not be blind to their weaknesses.

In truth, the title for our Thursday session could be contested. In reality, panels have never really been away. Nonetheless, in recent years the tendency has been to stress their limitations rather than their strengths, with panels almost seen as a necessary evil, a commoditised component when compared to the majestic power of big data.

In our session there were convincing arguments that the demise of the cookie in particular means that panels now have a vital role to play in allowing us to track behaviour across devices at an individual level, with full consent to track already obtained – ‘consent is king’. No amount of modelling of first-party data sets can give that overview.

Yet this renaissance should not obscure the limitations of panels. As media fragments, sample sizes are challenged, whilst (high-quality) panels are becoming ever harder and more expensive to recruit. In determining whether a panel is needed, we need to evaluate the optimum form and size that will make it fit for purpose: as a source of demographic profiles, of reach across platforms and as a single-source hub for ingesting data sets.

We really need to establish what a panel is good at and what it is not good at. There will be instances when big data sets are providing more precision than a panel can manage. To go back to my second ‘hot take’ above, in acknowledging the continued relevance of panels, we should not fall into the trap of assuming that they are the source of all truth – in some cases other data sources may be more reliable and the panel can be calibrated to them as opposed to vice-versa. Panels are ‘back’ but we need to play to their strengths and not be blind to their weaknesses.

7. There is clear progress on JIC cross-platform measurement but what about the walled gardens?

In our overview of progress towards cross-platform video measurement in different markets on Friday there were convincing arguments being made about the ability to supplement people-meter panels with router meters and streaming census data to measure across devices and platforms. However, few of these presentations even mentioned Google or Facebook, which take the lion’s share of online video advertising revenue.

In theory these JIC implemented systems – or those from broadcasters directly like CFlight – could be extended to incorporate data from the online giants, but there is little evidence to suggest that the approaches adopted could incorporate the walled gardens in practice. The JIC-originated systems require a degree of transparency and auditing that broadcasters are comfortable with, but seemingly is not acceptable to the digital giants.

That is why the VID approach is so significant, as it represents the first system for sharing of (virtual) data that is acceptable to both Google and Facebook. It stops short of true transparency, but it does seem to be acceptable to the advertisers driving the WFA initiative. So how can these two elements – broadcaster BVOD data and data from the walled gardens – be brought together? Any system will require a degree of compromise, but increasingly it seems that the battle lines are already drawn. Is there a real risk of the worst-case scenario that our Friday chair Richard Asquith floated: different ‘endorsed’ cross-platform video measurement systems which measure essentially the same thing but in different ways and will of course produce different audience data? How will we avoid trading descending into chaos?

8. The WFA North Star will be reached via entirely different business models in the UK and the US.

Whilst Phil Smith of ISBA and Natalie Bordes of ANA successfully communicated the commonality of their desire and methodological approaches to bring cross-media measurement to their respective markets, in the subsequent discussions it became clear that the business models that can bring this about will be entirely different.

In the UK, the established JIC model is being followed, with the industry being asked to come together to jointly fund the set-up of Project Origin. Debate rages as to who pays what, both for set-up and ongoing, with the advertiser levy under discussion that was first proposed by Brian Jacobs at our 2019 Prague conference. However, the basic concept of the industry coming together is established.

In the US this approach is illegal. JICs are legally seen as cartels and examples of market collusion. This means two things. Whilst the ANA can endorse tests from companies like Comscore and VideoAmp – with no doubt others to follow – it can’t commission a service. That will have to be established and sold by the vendor itself. Clearly there will be substantial financial risk involved for those services deemed by ANA – and presumably MRC – to be fit for purpose. And, yes, I do mean services plural, as it may well be that a number of companies run services that are approved to use the VID model and ingest Google and Facebook virtual data. The free market will win out, but that further increases the risk for companies if they are not the solution, but a solution. Contrast that with Kantar in the UK, which will simply be a contracted panel supplier to Origin. As someone who was at AGB at the time of its failed attempt to establish a US TV measurement service in the 90s, I know how risky setting up major services in the hope of subscribers can be.

9. Rip it up and start again?

In our Thursday discussion, VideoAmp’s Josh Chasin acknowledged that panels still have a role to play, but argued that measurement needs to be reinvented, that we should ideally wipe the slate clean and build new measurement systems from scratch, designed to meet the needs of the current eco-system, as opposed to attempting to adapt existing legacy systems to make them fit for purpose. Effectively it’s the equivalent of the joke about the tourist who asks the local for directions and is told ‘Well I wouldn’t start from here’.

So, should we rip it up and start again, or would we be throwing away firm foundations built with decades of knowledge and expertise? (I realise this makes media research sound like the Sagrada Familia in Barcelona, admirable but baroque and never-ending!)

Personally, my answer would depend on whether I was talking from an American perspective or from that of the rest of the world. The ‘building on strong foundations’ argument for evolution not revolution is far more persuasive outside of the US, whereas measurement in the States is clearly in a state of crisis at the moment. The ANA/WFA initiative is in danger of being conflated with the mini-revolution against the current Nielsen system, with one of the rebel leaders, NBCU’s Kelly Abcarian, claiming to have received 100 tenders for replacements.

The argument for wiping the slate clean and building again will be more attractive to the US industry and I would lay the blame for the current predicament with the interpretation of US law that makes JICs illegal (see point 8 above). It has made it harder for the US industry to move as one and, in particular, it creates an antagonistic situation where the industry rages against the Nielsen machine as opposed to supporting and changing from within a currency they themselves fund and govern.

As a result, the US has lagged the rest of the world in measurement innovation for a while now. A clean slate in the US will achieve little and may even be a step into anarchy unless it coincides with a concerted effort by the industry to change the accepted interpretation of the law on JICs, currency measurement and shared investment. Otherwise, any new dawn could lead to the potential chaos of multiple currencies as opposed to the industry building consensus around – and investing in – one optimal solution.

10. In-person events are still the best.

Yes, we had a lot of great feedback on the event, and in particular delegates appreciated the live nature of the format, with the ability to interact directly with the speakers. That live format was fraught with risks, as inadvertently broadcast expletives aimed at my laptop made clear but, hey, at least it showed it was live. Nonetheless, from our perspective a virtual event is a shallower experience compared to the interactions at our in-person events. That’s why we are desperately hoping that by November next year as many of you as possible will be able to join us in Nice. It will be great to get the gang together after two gap years.

We can’t wait – there is still much to discuss.

The 2022 asi International Radio & Audio and Television & Video Conferences will be held on 2nd to 4th November 2022.