Harker's Corner Archives - Crowd React Media
Cut Through the Noise
https://crowdreactmedia.com/category/harkers-corner/

PPM’s Convoluted Incentives
https://crowdreactmedia.com/radio/ppms-convoluted-incentives/
Thu, 06 Jun 2024
If a Nielsen PPM panelist wants the cash to keep coming, he or she has to do one thing: keep the meter moving. The device that panelists carry has an accelerometer, just like a Fitbit and other fitness trackers. In the same way a Fitbit can count your steps, a PPM meter (and Nielsen in turn) knows how active a panelist is.

The accelerometer is Nielsen’s spy, tattling on the panelist who isn’t moving enough. If a PPM meter sits idle for too much of the day, the panelist will be contacted by a member of the Nielsen Panel Relations Team and “reminded” that he or she MUST carry the meter “during all waking hours.” If a team member has to issue too many reminders, the panelist can be removed from the panel.
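Conceptually, this compliance check is just a threshold on daily motion. Here is a minimal sketch of how such a flag might work; the hourly-count format, the thresholds, and the function itself are our own illustrative assumptions, not Nielsen’s actual (unpublished) rules:

```python
# Illustrative sketch only: Nielsen's real compliance logic is not public.
# We assume the meter reports hourly motion counts from its accelerometer
# and that a day is flagged when too few hours show meaningful movement.

def flag_inactive_meter(hourly_counts, min_active_hours=8, motion_threshold=50):
    """Return True if a day's accelerometer log looks too sedentary.

    hourly_counts: 24 motion counts (arbitrary units), one per hour.
    An hour counts as "active" if its motion exceeds motion_threshold;
    the day is flagged if fewer than min_active_hours are active.
    """
    active_hours = sum(1 for c in hourly_counts if c > motion_threshold)
    return active_hours < min_active_hours

# A meter left on a table all day registers almost no motion:
idle_day = [5] * 24
# A carried meter shows motion through the waking hours:
carried_day = [5] * 8 + [300] * 14 + [5] * 2

print(flag_inactive_meter(idle_day))     # True  -- expect a call from Panel Relations
print(flag_inactive_meter(carried_day))  # False
```

Note that nothing in this check involves encoded audio: a meter tied to a dog collar passes it just as well as one carried by an attentive panelist.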

Mind you, the meter need not detect a single moment of encoded audio for the duration of a panelist’s participation (up to 26 months) for him or her to stay on the panel. In fact, Nielsen goes out of its way to tell panelists that they need not consume any media to earn incentives and premiums. They just have to keep the meter moving to keep the money coming.

This creates a twisted incentive for panelists to game the system. Ex-panelists tell stories of giving the meter to their children or even their pets to keep the Nielsen Panel Relations Team at bay. One social media post shared a panelist’s solution:

(The panelist) tied the scanner to the dog collar. The dog followed the owner around the house so the person figured it was accurate >95% of the time.

Maybe it’s a true story; maybe it’s not. But carrying the meter during all “waking hours” is a challenge for even the most conscientious panelist, so it shouldn’t be surprising if a child or pet ends up with the chore.

Nielsen’s Fractured Fairytales
https://crowdreactmedia.com/radio/nielsens-fractured-fairytales/
Mon, 06 May 2024

Time Spent per Occasion, TSPO for short. It’s a PPM metric that you have probably never heard of. You won’t find it in any of the Nielsen monthly market reports. It doesn’t appear in any Nielsen literature. To our knowledge, the only time TSPO was shared with the radio community was in 2011, when Arbitron released a report called PPM Top Performers: Key Indicators of Highly Rated PPM Stations.

To Arbitron’s credit, the company created many reports like this in the early days of PPM, sharing its observations. We miss Arbitron’s openness in explaining PPM.

This report is unique because it turned out to be a valuable look under the hood of PPM. It gave us insights into the mechanisms of PPM, perhaps more than Arbitron intended.

The study’s goal was to analyze the PPM metrics of top-rated stations to see how they differ from other stations. One of the metrics examined was Time Spent per Occasion: the average length of time that panelists listen to a radio station before they switch stations or turn the radio off.
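The metric itself is simple arithmetic. Here is a sketch of the calculation over a hypothetical session log; the data format (and the station call letters) are our own illustration, not Nielsen’s actual data layout:

```python
# Sketch of the Time Spent per Occasion calculation as defined above.
# Each occasion is (station, minutes listened before switching or turning off).
# The log format is an assumption for illustration, not Nielsen's.

def time_spent_per_occasion(occasions, station):
    """Average length, in minutes, of one station's listening occasions."""
    spans = [minutes for s, minutes in occasions if s == station]
    return sum(spans) / len(spans) if spans else 0.0

day = [("WAAA", 12), ("WBBB", 4), ("WAAA", 8), ("WAAA", 10)]
print(time_spent_per_occasion(day, "WAAA"))  # (12 + 8 + 10) / 3 = 10.0
print(time_spent_per_occasion(day, "WBBB"))  # 4.0
```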

As far as we know it is the only time that Arbitron or Nielsen has released data on this metric, and in a moment you’ll see why.

Arbitron found that across all PPM markets the average radio station had a Time Spent per Occasion of 10 minutes or less. That means after 10 minutes the average listener either switched to another station or turned the radio off altogether.

When Arbitron released this finding we were surprised. Keep in mind this was 2011, when radio was riding high with minimal competition. Ten minutes is three songs on a music station, less than a single segment on a talk station. Could it be that the typical listener is that fickle?

Ten minutes seemed too brief, but this was the average across 2,333 stations in all 48 PPM markets. Perhaps the short spans of the stations at the bottom of the rankers were dragging down the average.

But what followed next in Arbitron’s research really shocked us.

Arbitron also claimed that listening spans for the top three stations in PPM markets were 10 minutes. Just 10 minutes. Across 48 of the top American markets, the top 144 stations had listening spans of only 10 minutes. Even eliminating the poorly rated stations didn’t raise average listening spans.

Really?

And it didn’t stop there. The number one stations in the 48 PPM markets also averaged listening spans of 9 or 10 minutes! To top it off, both 18-34s and 25-54s listened for… guess… 9 or 10 minutes! Stop here and think about that for a moment.

How can all stations in PPM markets, the top three stations, the number one stations, and both young and old listeners all have identical listening spans?

As you might expect, the implications of Arbitron’s finding rippled through the industry. If listeners really tune in for only ten minutes, what is the point of quarter-hour maintenance? Why try to keep your listeners for another quarter-hour if they are going to be gone regardless?

HarkerBos Research was very skeptical of Arbitron’s conclusions and cautioned our clients to resist the urge to program their stations based on a 10-minute listening span.

There’s no reason to believe that the results of a similar study would differ today. PPM protocols have changed very little during the intervening years. If anything, the few Nielsen “tweaks” would likely shorten TSPO.

We believe that the 10-minute TSPO does not reflect true listening spans. It more likely is a consequence of PPM meter limitations. PPM is not measuring listening. It is measuring exposure. Under ideal circumstances it can capture a fair amount of exposure, but despite repeated assurances to the contrary it is clear that PPM does not capture all exposure.

We knew that meters lose contact from time to time, but we didn’t know how often. Arbitron’s finding that stations from the top of the ranker to the bottom get credited with the same 10-minute listening spans (we hope across two quarter-hours!) suggests that, on average, the meter loses contact every 10 minutes.
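A toy model shows why chronic contact loss would produce exactly this pattern. Everything below is assumed for illustration, in particular the deterministic 10-minute dropout interval, but it captures the arithmetic: once the meter drops the code roughly every 10 minutes, true listening spans of very different lengths all get credited as occasions of about 10 minutes:

```python
# Toy model of the contact-loss hypothesis (not Nielsen methodology).
# Assumption: the meter loses the station's code every 10 minutes,
# splitting one real listening occasion into several short credited ones.

def credited_spans(true_span_minutes, dropout_every=10):
    """Split one true occasion into the spans the meter would credit."""
    spans = []
    remaining = true_span_minutes
    while remaining > 0:
        spans.append(min(dropout_every, remaining))
        remaining -= dropout_every
    return spans

def measured_tspo(true_span_minutes, dropout_every=10):
    """Average credited span -- what the ratings would report as TSPO."""
    spans = credited_spans(true_span_minutes, dropout_every)
    return sum(spans) / len(spans)

# A weak station holding listeners 20 minutes and a strong one holding
# them a full hour both report a measured TSPO of 10 minutes:
print(measured_tspo(20))  # 10.0
print(measured_tspo(60))  # 10.0
print(measured_tspo(35))  # 8.75 -- a partial last segment pulls it down a bit
```

Under this model the measured TSPO says almost nothing about how long listeners actually stayed; it mostly measures the dropout interval, which is the same for every station.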

We do not think Arbitron intentionally tried to mislead the radio community. The study was a sincere attempt to provide useful information regarding PPM and how to succeed in this new PPM world. The authors along with most of radio at the time were just blind to the limitations of PPM.

Top rated radio stations succeed because they are better and more liked than their competitors. A station with a good product that’s well targeted will hold on to listeners longer than its poorly targeted competitors. Don’t believe Nielsen’s fractured fairytale.

Accuracy, Precision, and why Nielsen numbers deliver neither
https://crowdreactmedia.com/radio/accuracy-precision-and-why-nielsen-numbers-deliver-neither/
Mon, 04 Mar 2024

It’s easy to confuse accuracy with precision. At first glance, they may seem to be similar or at least related. However, they are very different, particularly when it comes to radio ratings. Ratings can be accurate but not precise. Ratings can be precise and not accurate. And unfortunately for radio, Nielsen ratings are neither.

To accurately and precisely determine how many people listen to your station Nielsen would need to question every person in your market. Obviously, that isn’t practical, so Nielsen, like other pollsters, recruits a relatively small number of people to either carry a PPM device or write down the stations they listen to. The company tries to recruit a representative cross-section of the market, but it isn’t easy. Most people don’t want to carry a meter or record their listening for a week.

To entice participants, Nielsen offers monetary incentives. But even with incentives, Nielsen struggles to create a panel of participants that closely match a market. The young and people of color are particularly difficult to recruit.

To make their panels representative, Nielsen likes to slice markets into very thin pieces, trying to recruit based on various combinations of age, sex, ethnicity, household size, internet access, and other criteria, making recruitment that much harder. Combine that with the company’s relatively small number of participants and it is inevitable that many groups will be represented by a tiny number of participants.

In some dayparts we’ve seen important listener segments represented by a single person.

Nielsen claims the numbers are accurate (“radio’s currency,” they say), but the very fact that the numbers are based on the behavior or recollection of a relatively small proportion of a market’s listeners means that the numbers can’t be accurate.

It’s true of every poll, but at least most polls acknowledge the fact by providing a margin of error, a measure of how far the numbers may be from the true answer. (Nielsen does too, but try to find it. It’s buried deep and only available to subscribers. On top of that, its error estimates ignore most of the factors that might invalidate them.)

Further compromising the utility of the numbers is the fact that even if the 6+ numbers were accurate, stations focus on specific demographics, the small slices of the market that make up their audience. Margins of error, the estimate of how far a data point can be from the truth, widen as we slice the pie.

For example, if a station targets women 18-34 the number of panelists that contribute to the numbers is a fraction of the total “in-tab.” Nielsen has historically fallen short in younger demos so any station targeting younger listeners is relying on the behavior of a small number of target listeners.

So if Nielsen numbers are only an estimate subject to error and uncertainty, why are shares carried out to a decimal point as if they were precise? Can Nielsen claim that there is a real audience-size difference between one station with a 2.4 and another with a 2.6? No, but carrying shares out to a tenth makes the numbers look more precise than they really are.

If all shares were rounded to whole numbers, the rankers would be more accurate in the sense that whole numbers more accurately capture the reality that shares cannot be determined to one-tenth. The problem for Nielsen is that we would then have many ties.

Let’s say your station has a 3.2 share. Depending on the number of active meters or diary keepers and the demographic you are looking at, the 3.2 share may actually be (for example) anywhere from a 4.0 to a mid-2 share. Your competitor might have a 3.4 share giving the illusion that they have more listeners. However, it’s really a tie. All we really know is that both stations have about a 3 share.
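To put rough numbers on this, treat a share as a simple proportion measured across n panelists. That is a simplification in Nielsen’s favor: PPM shares are weighted, time-based estimates, so the real error bands are, if anything, wider. The 400-panelist in-tab below is an assumed figure, not a Nielsen number:

```python
# Back-of-the-envelope margin of error for a reported share, treating the
# share as a simple proportion across n panelists. Real PPM shares are
# weighted time-based estimates, so true uncertainty is wider than this.
from math import sqrt

def share_interval(share_pct, n_panelists, z=1.96):
    """Approximate 95% confidence interval (in share points) for a share."""
    p = share_pct / 100.0
    moe = z * sqrt(p * (1 - p) / n_panelists) * 100.0
    return share_pct - moe, share_pct + moe

low_a, high_a = share_interval(3.2, 400)   # roughly 1.5 to 4.9
low_b, high_b = share_interval(3.4, 400)   # roughly 1.6 to 5.2
print(f"Station A: {low_a:.1f} - {high_a:.1f}")
print(f"Station B: {low_b:.1f} - {high_b:.1f}")
# The two intervals overlap almost completely: a 3.2 vs. a 3.4 is a tie.
```

Even under these generous assumptions, the margin of error is many times larger than the two-tenths separating the stations.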

If your station is ranked outside the top five or targets a narrow portion of the market, your reported share could be even further from your actual market share.

Nielsen trends can be particularly pernicious because of broad margins of error. How would you react to this trend: 5.2, 4.8, 5.0, 4.6? Would you panic? Would you start questioning your programming decisions? In reality the station may be just as strong in the fourth month as the first. The wobbles are just that, all within Nielsen’s margin of error.
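A trend like that is exactly what pure sampling noise produces. The sketch below draws six monthly “books” around a constant true share of 5.0, using a rough binomial standard error for an assumed 400-meter in-tab; any movement in the output is noise, because the underlying audience never changes:

```python
# Illustrative only: monthly share "trends" generated from sampling noise
# around a constant true share. The in-tab size is an assumption.
import random
from math import sqrt

random.seed(7)
TRUE_SHARE = 5.0     # the station's (unchanging) real share
N_PANELISTS = 400    # assumed in-tab, not a Nielsen figure
se = sqrt(0.05 * 0.95 / N_PANELISTS) * 100   # ~1.1 share points

trend = [round(random.gauss(TRUE_SHARE, se), 1) for _ in range(6)]
print(trend)
# Six "books" from a station whose audience never changed.
# Every up and down in this list is pure sampling noise.
```

Run it a few times with different seeds and you can generate “momentum,” “slumps,” and “rebounds” at will, none of them real.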

While our examples are hypothetical, you need only look at monthly trends to see this in action. Each month there are significant, inexplicable share swings: a station has a good book for no clear reason, only to crater the next month just as inexplicably.

When pressed, Nielsen will suggest that it is best to average monthly numbers to smooth out the swings, but wouldn’t it be better if Nielsen smoothed out the swings before claiming its numbers are radio’s currency? And averaging several months may not even out the swings. Our analyses suggest that Nielsen estimates can head in the wrong direction for multiple months.

Radio stations that make programming and marketing decisions based on Nielsen trends are deluding themselves and in all probability taking the station in the wrong direction.

Is Nielsen Picking Radio Format Winners & Losers?
https://crowdreactmedia.com/radio/is-nielsen-picking-radio-format-winners-losers/
Mon, 26 Feb 2024

Can a company measuring radio station listenership impact the success of music formats?

It’s a question Harker Bos Group raised in 2007 when we learned that the largest 50 radio markets would no longer be measured via diary. Instead, with the new method, the Portable People Meter (PPM), stations would encode their programming with an “inaudible” identifying code that pager-like devices carried by panelists would detect.

We expressed concern that issues with the encoding/decoding process could benefit some formats and penalize others. To maintain its “inaudibility,” the identifying code “rides” on a station’s programming, with its level determined by the loudness of the programming. We believed that louder, highly compressed formats would have an advantage over more dynamic, less compressed formats.
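Our concern reduces to a simple numeric model. The actual PPM (CBET) encoder is proprietary, so the proportionality constant and noise floor below are invented purely to illustrate the mechanism: when the code level tracks program loudness, a compressed signal keeps the code decodable in every time window, while a dynamic one drops below the floor whenever the music gets quiet:

```python
# Conceptual sketch only: the real PPM/CBET encoder is proprietary.
# Assumptions: the code is embedded at a fixed fraction of the program's
# short-term level, and the meter needs the code above a noise floor.

CODE_FRACTION = 0.05   # assumed: code rides at 5% of program level
NOISE_FLOOR = 0.02     # assumed: minimum decodable code level

def decodable_fraction(program_levels):
    """Fraction of time windows in which the embedded code is recoverable."""
    decodable = [lvl * CODE_FRACTION >= NOISE_FLOOR for lvl in program_levels]
    return sum(decodable) / len(decodable)

# Heavily compressed format: loudness pinned near maximum in every window.
compressed = [0.9, 0.85, 0.9, 0.88, 0.9, 0.87]
# Dynamic, soft format: quiet passages where the code level drops too.
dynamic = [0.9, 0.3, 0.15, 0.6, 0.2, 0.1]

print(decodable_fraction(compressed))  # 1.0 -- every window carries the code
print(decodable_fraction(dynamic))     # only 2 of 6 windows decode
```

In this toy world a panelist could listen to both stations equally, yet the soft format would be credited with a fraction of the exposure, which is precisely the bias we worried about.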

It didn’t take long before our fears seemed justified. Some stations that performed well when measured by diary plummeted with the switch to PPM. At the time we were told that PPM was more accurate and that stations that suffered under PPM had been boosted by the diary.

The more we learned about the inner workings of PPM the more we questioned that explanation. Nielsen claimed that “if the listener can hear the radio station, PPM can too” but assurances are not proof. It seemed more likely that PPM was failing to capture all listening, that its ability to identify stations was somewhat dependent on the type of programming a station delivered.

General Managers, Program Directors, and Chief Engineers soon figured out how to game PPM, using aggressive processing to keep the “inaudible” codes as strong and consistent as possible. Fidelity took a back seat to keeping the product consistently loud and compressed. This was practical for some formats but not for background formats.

Efforts to game PPM drove innovation, and several years into the PPM era one company, 25-Seven, developed the Voltair, a processor that made the “inaudible” codes more robust. Stations that used the processor saw a gain in audience, but even with Voltair, gains varied by format.

Historically, some of the most successful radio formats have offered soft, relaxing music. Many markets had multiple stations in formats such as Smooth Jazz and Soft AC. The formats were ideal for the many listeners who used radio as background, to fill the quiet while working or relaxing.

Harker Bos Group recently conducted a national study of media consumption, examining radio alongside other audio sources. We found a significant proportion of listeners gravitating to music formats beyond the few available to radio listeners.

Has PPM forced radio to focus on a few PPM-friendly formats and avoid other formats that don’t register on PPM meters as well? Are there other format opportunities that could attract an audience were it not for PPM’s favoritism? Could radio expand its reach if it wasn’t shackled to PPM? We will have more to say on this topic in future blogs.
