James Cridland: Radio should be where the speakers are

NPR One app on Amazon Echo smart speaker. Photo: NPR

James Cridland: Recently, my columns have been quite full of numbers, figures and statistics, so this week I’d like to do some thinking aloud instead. I have a theory. It has almost no science behind it – yet – and probably very little proof, but nonetheless, I think it’s true.

I think that, for live radio, we should be focusing on getting our output on speakers, not headphones. Our distribution strategy for live radio should be carefully honed to make sure that we get on as many speakers as possible.

So, why would I say this?

Well, look, it’s only a theory. But it goes a bit like this.

Radio is built to be a multi-tasking medium. You typically listen while you’re doing something else, like driving, cooking or working in a store. Headphones are impractical in these situations for a variety of reasons – the wires get in the way, and the earbuds block out noise and stop conversations. Speakers work quite nicely in this environment.

While many computer programmers wear headphones to help them concentrate, a more typical office worker has more interruptions: the telephone, colleagues asking questions, popping over to the coffee machine, etc. Headphones don’t work well here, since you need to pull them off in order to interact with others. Speakers, though, are typically not loud enough to interfere.

Radio’s highly processed audio is designed to work well on speakers, and produces a constant, clear and intelligible sound. On headphones, however, it’s quite a tiring experience. A long set of commercials, particularly, is a difficult listen on headphones (try it), while on a radio in the corner of the room you tend to deal with it rather better for some reason.

Most importantly, using headphones means we are tethered to our audio device – by wire or by Bluetooth. That inevitably means we are within arm’s reach of the device. In the circumstances where headphone-wearing works well, like sitting on public transport or waiting in a queue, our devices are by definition close enough to fiddle with, and our eyes and fingers also need something to do. This lends itself to interactive experiences on a screen (like YouTube, or a game, or Facebook); it doesn’t work too well with a live but otherwise unchangeable stream.

So: what does this mean in a practical sense?

First – smart speakers, like Amazon Echo, Google Home, or Sonos? These devices are the new radio: they’re here to build your TSL (time spent listening). Ensure you’re available on them (in most cases, you should be thinking about a country-wide technology like Radioplayer, which has all this stuff built-in, and the radio industry ends up owning the platform).

That also means trying to get your station onto the telly – which is just a speaker in a box in your living room. Any country with digital TV can potentially shave a tiny bit of bandwidth off a TV channel and sell it to you, so you can get your audio on there too (and a place in the channel listings). 14% of Brits listen to radio on the TV every week – it’s a proven platform.

This might also mean that you should focus on a really good tablet app. Yes, tablets are nowhere near as popular as mobile phones; but they have decent speakers, and are often used without headphones. Put the live stream front-and-centre here, perhaps, rather than lots of opportunities for on-demand content.

… read on at radioassistant.com

Originally posted at RadioAssistant

James Cridland will be speaking at the 2017 asi International Radio & Audio Conference in Nice, France, on 8th November