Accuracy, Precision, and why Nielsen numbers deliver neither

It’s easy to confuse accuracy with precision. At first glance, they may seem to be similar or at least related. However, they are very different, particularly when it comes to radio ratings. Ratings can be accurate but not precise. Ratings can be precise and not accurate. And unfortunately for radio, Nielsen ratings are neither.

To accurately and precisely determine how many people listen to your station, Nielsen would need to question every person in your market. Obviously, that isn’t practical, so Nielsen, like other pollsters, recruits a relatively small number of people to either carry a PPM device or write down the stations they listen to. The company tries to recruit a representative cross-section of the market, but it isn’t easy. Most people don’t want to carry a meter or record their listening for a week.

To entice participants, Nielsen offers monetary incentives. But even with incentives, Nielsen struggles to create a panel of participants that closely matches the market. The young and people of color are particularly difficult to recruit.

To make their panels representative, Nielsen likes to slice markets into very thin pieces, trying to recruit based on various combinations of age, sex, ethnicity, household size, internet access, and other criteria, which makes recruitment that much harder. Combine that with the company’s relatively small number of participants, and it is inevitable that many groups will be represented by a tiny number of panelists.

In some dayparts we’ve seen important listener segments represented by a single person.

Nielsen claims the numbers are accurate (“radio’s currency,” they say), but the very fact that the numbers are based on the behavior or recollection of a relatively small proportion of a market’s listeners means that the numbers can’t be accurate.

It’s true of every poll, but at least most polls acknowledge the fact by publishing the margin of error, a measure of how far the numbers may be from the true answer. (Nielsen does too, but try to find it. It’s buried deep and only available to subscribers. On top of that, their error estimates ignore the majority of factors that might invalidate their estimates.)

Further compromising the utility of the numbers is the fact that even if 6+ numbers were accurate, stations focus on specific demographics, the small slices of the market that make up their audience. Margins of error, the estimate of how far a data point can be from the truth, widen as we slice the pie.

For example, if a station targets women 18-34 the number of panelists that contribute to the numbers is a fraction of the total “in-tab.” Nielsen has historically fallen short in younger demos so any station targeting younger listeners is relying on the behavior of a small number of target listeners.
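A rough sketch shows why thin slices hurt. The snippet below uses the textbook margin-of-error formula for a proportion under simple random sampling; the in-tab sizes are hypothetical, and Nielsen's actual panel design is more complicated (so its real error is, if anything, larger than this formula suggests):

```python
import math

def share_moe(share_pct, n, z=1.96):
    """Approximate 95% margin of error (in share points) for a share
    estimated from n panelists, assuming simple random sampling.
    Nielsen's real design is more complex; true error is likely larger."""
    p = share_pct / 100.0
    return z * math.sqrt(p * (1 - p) / n) * 100

# Hypothetical in-tab sizes: a full-market panel vs. ever-thinner demo slices.
for n in (2000, 400, 60):
    print(f"n = {n:4d}: a 3.0 share is +/- {share_moe(3.0, n):.1f} points")
```

With a hypothetical 2,000 in-tab the uncertainty is under a point; cut the slice to 60 panelists and a "3.0 share" could plausibly be anywhere from near zero to over 7.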

So if Nielsen numbers are only an estimate subject to error and uncertainty, why are shares carried out to a decimal point as if they are precise? Can Nielsen claim that there is a real audience size difference between one station with a 2.4 and a second station with a 2.6? No, but carrying shares out to a tenth makes the numbers look more precise than they really are.

If all shares were rounded to whole numbers, the rankers would be more accurate in the sense that whole numbers more accurately capture the reality that shares cannot be determined to one-tenth. The problem for Nielsen is that we would then have many ties.

Let’s say your station has a 3.2 share. Depending on the number of active meters or diary keepers and the demographic you are looking at, the 3.2 share may actually be (for example) anywhere from a 4.0 to a mid-2 share. Your competitor might have a 3.4 share, giving the illusion that they have more listeners. However, it’s really a tie. All we really know is that both stations have about a 3 share.
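The 3.2-versus-3.4 "tie" can be made concrete with confidence intervals. This sketch assumes a simple random sample and a hypothetical in-tab of 1,000 for the demo in question; the point is not the exact interval widths but how thoroughly the two intervals overlap:

```python
import math

def share_ci(share_pct, n, z=1.96):
    """Approximate 95% confidence interval for a share, assuming a
    simple random sample of n panelists (a simplification of Nielsen's design)."""
    p = share_pct / 100.0
    moe = z * math.sqrt(p * (1 - p) / n) * 100
    return (share_pct - moe, share_pct + moe)

# Hypothetical demo in-tab of 1,000 panelists.
a = share_ci(3.2, 1000)   # "your" station
b = share_ci(3.4, 1000)   # the competitor
print(f"Station A: {a[0]:.1f} to {a[1]:.1f}")
print(f"Station B: {b[0]:.1f} to {b[1]:.1f}")
# The intervals overlap almost completely: a 0.2-point gap is statistically a tie.
```

Both intervals span roughly a low-2 to a mid-4 share, so nothing in the data supports ranking one station above the other.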

If your station is ranked outside the top five or targets a narrow portion of the market, your published share could be even further from your actual market share.

Nielsen trends can be particularly pernicious because of broad margins of error. How would you react to this trend: 5.2, 4.8, 5.0, 4.6? Would you panic? Would you start questioning your programming decisions? In reality the station may be just as strong in the fourth month as the first. The wobbles are just that, all within Nielsen’s margin of error.
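A small simulation makes the point. Here a hypothetical station holds a perfectly constant 5.0 share while a hypothetical in-tab of 800 panelists is drawn fresh each month; the month-to-month "trend" that comes out is pure sampling noise:

```python
import random

random.seed(7)
true_share = 0.05   # the station's actual, unchanging 5.0 share
n = 800             # hypothetical monthly in-tab

# Each month, count how many of the n panelists happen to pick the station.
months = []
for _ in range(4):
    hits = sum(1 for _ in range(n) if random.random() < true_share)
    months.append(round(100 * hits / n, 1))
print(months)  # four monthly "shares" that wobble even though nothing changed
```

Run it a few times with different seeds and you get exactly the kind of drift that tempts programmers to second-guess a format that is actually performing steadily.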

While our examples are hypothetical, you need only look at monthly trends to see this in action. Each month there are significant, inexplicable share swings: a station has a good book for no clear reason, only to crater the next month for no clear reason.

When pressed, Nielsen will suggest that it is best to average monthly numbers to smooth out the swings, but wouldn’t it be better if Nielsen smoothed out the swings before claiming their numbers are radio’s currency? And averaging several months may not even out the swings. Our analyses suggest that Nielsen estimates can be heading in the wrong direction for multiple months.

Radio stations that make programming and marketing decisions based on Nielsen trends are deluding themselves and in all probability taking the station in the wrong direction.

Richard Harker