By Danny Sullivan
The recent news of AOL’s $850-million acquisition of popular social networking site Bebo makes it clear that social networking continues to be considered an influential and valuable market segment.
And there is little doubt that social networking represents a significant new marketing channel for consumer products companies. Indeed, many strategies and approaches already advise on exactly this topic, and viral marketing has never been more popular as a result.
But what does it mean for the B2B sector? Are social networking strategies an important PR component for companies selling enterprise software solutions or new server technology?
Well, we’re probably not talking Facebook or Bebo here. As far as I’m aware, these sites don’t yet host large groups of CIOs swapping tales of ROI and disaster recovery strategies.
But social networking extends beyond the realm of those well-known sites, and there are networks catering to members holding virtually every job description known to humankind.
Still, although such networks do present opportunities for PR professionals, any strategy to target them should be carefully considered. You can’t join a networking group and then start spamming everyone with your news releases… well, I suppose you can, but you won’t be in the group for very long!
A consumer product company will seek to generate excitement and interest for its products among social networkers in its target demographic, and a B2B vendor must seek to do the same. But because the B2B audience is (usually) well educated and (almost always) sceptical, the approach and method must be well thought out.
As with contributions to conventional media outlets, companies seeking to influence such networks must present value to the audience and should not embark on blatant pitches for business. Providing relevant, interesting and thought-provoking content is key.
Social networking is all about sharing – and this philosophy must be upheld by those seeking to take advantage of it.
By Francis Moran
As part of my continuing series of Francis’s favourite PR fictions, subtitled “Everything I know that’s wrong about PR I learned from technology company executives,” I have written a couple of posts on PR measurement addressing the common myth that straight lines can’t be drawn between a company’s PR efforts and any kind of real evaluative yardsticks. I return to the topic today because I am getting some interesting comments on the subject. Clearly, it’s something that people are keen to explore.
Our approach here at inmedia is to measure outputs, outcomes and impact. In my first post, I described what we mean by outputs, which are little more than the critical path, or a list of how much PR stuff the client is buying. While most PR agencies and practitioners will set clear parameters for their outputs, too few are prepared to go any further than that.
We insist that every program go at least one step beyond this minimal evaluation to set, and measure performance against, objectives for outcomes, or the amount, nature and content of the media and analyst coverage our efforts are expected to generate. In my more than 20 years as a communications practitioner, I have found distressingly few others who will commit to being held accountable for the actual results of their programs in clear, unambiguous terms that allow the client to make a rational ROI analysis about whether the promised level of media and analyst engagement is worth the cost of the program.
Fortunately, there is a growing and increasingly sophisticated audience of both practitioners and clients insisting on this. Many are deploying simple yardsticks that go well beyond what I call “thud value,” or the noise the clippings book makes when you drop it on the boardroom table in the hopes the client will be impressed by the sheer number of column inches. These yardsticks, which we commonly use, include determining which media outlets and analyst firms are the most influential — we designate them Tier 1 — and then telling the client exactly how well the program is expected to do in terms of percentage of Tier 1 targets engaged, types of stories, the nature of the messaging, numbers of analyst briefings, speaking engagements, and so on.
Many practitioners go well beyond this to provide granular analysis of the actual content of the media coverage. Although few of our B2B technology clients generate the volumes of media coverage that make such a statistical exercise either practical or meaningful, I am a huge advocate of media content analysis as both a strategic research and a program evaluation tool. I will write more about this topic in a future post on PR measurement because it deserves fuller treatment.
My second post described how even measuring outcomes often falls short of meaningful evaluation, especially in cases, admittedly rare but real nonetheless, where there is masses of coverage but no persistent impact on the client’s business objectives.
Which brings me to the final, most critical, hardest to implement and most elusive category of objectives we strive to track, impact. I will present case studies over my next several posts to illustrate how many of these have been used to help our clients calculate a reliable and meaningful ROI on their PR spend, but here is a range of common metrics that can be used to measure the impact a program has on everyday business objectives:
- Web traffic, measured in hits to a company site, Google mentions, search engine rankings, and so on.
- Demand creation, or what used to be known as lead generation. I like the newer term because it distinguishes between mere enquiries and actual demand for the product or service.
- Sales cycle acceleration.
- Customer interest in the media coverage.
- Investment secured.
- Increased sales, revenues and profit. (Now THAT is what we’re really talkin’ about!)
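To make the arithmetic behind these impact metrics concrete, here is a minimal sketch of how a free-download-to-customer conversion rate and a simple PR ROI figure might be computed. All figures, function names and the attribution of revenue to the PR program are hypothetical, purely for illustration; real programs would need a defensible attribution model behind the numbers.

```python
def conversion_rate(downloads, paying_customers):
    """Fraction of free downloads that converted into paying customers."""
    if downloads == 0:
        return 0.0
    return paying_customers / downloads


def pr_roi(revenue_attributed, program_cost):
    """Simple ROI: net return divided by the cost of the PR program."""
    return (revenue_attributed - program_cost) / program_cost


# Hypothetical campaign numbers
downloads = 12000       # responses to the free offer
customers = 36          # of those, how many became paying customers
rate = conversion_rate(downloads, customers)

# Hypothetical: $150,000 in sales attributed to a $50,000 program
roi = pr_roi(revenue_attributed=150000, program_cost=50000)

print(f"Conversion rate: {rate:.2%}")  # 0.30%
print(f"PR ROI: {roi:.1f}x")           # 2.0x
```

The point of the P2P case study that follows in the next post is precisely that a huge `downloads` figure can coexist with a near-zero `rate`; outcomes alone would have looked spectacular while the impact number told the real story.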
I’d be intrigued to hear from others as to what they think of these metrics, and also to hear about other yardsticks that are used. Subsequent posts will deal with how the data required to deploy these metrics can be gathered, as well as presenting, as mentioned, specific case study examples.
By Francis Moran
About a month ago, as part of my continuing series of Francis’s favourite fictions, I tackled the too-widely held myth that public relations can’t be measured. I described how, at inmedia, we establish a critical path, or set of outputs, for every project and ongoing program that allows our clients to certify that we’re exerting the amount of effort we said we would. This, I said, was a good starting point for program measurement, but a woefully inadequate one.
I went on to describe what we call outcomes, a set of clear and unambiguous objectives we set that tell our clients what they should expect by way of actual coverage by our target media and analysts, with more granular objectives established for specific program elements such as news releases, product launches, contributed articles, speaking programs, trade show support and so on. Applying such an approach turns the whole PR value proposition on its ear; instead of a cost centre that should be managed down to its minimum, a client can now view the PR function as an investment centre, and can answer the question, “Are these results, or outcomes, a sufficient return on the investment my PR agency or department is asking me to make?”
In my earlier post, I promised to go even further than this, to approach the holy grail of ROI measurement. What does it matter, I asked, if we achieve the outcomes we projected but the media and analyst coverage hasn’t advanced our clients’ business objectives? Or, maybe even worse since decisions then can’t be made about whether or not to continue the program, what if we can’t tell whether our clients’ business objectives are being advanced by our PR efforts?
In my practice, it is simply unacceptable that we not be able to measure the impact our PR program has on specific business objectives such as demand creation, web traffic, sales-cycle acceleration, human resources recruitment and retention, share price and, yes, even sales, revenues and profits. Let me share with you a really good case study.
We used to have a client whose managed service allowed large enterprises to inventory all their IT assets; not just desktops, laptops and servers but all peripherals, operating systems and applications, including versions and licenses. As a managed service, our client had a massive database that, in aggregate, yielded highly reliable insight into certain IT-related issues within corporate America. The company’s budget with us was very small, so our program consisted of identifying the occasional high-profile IT issue, commissioning a report that demonstrated how pervasive that issue was, and generating media coverage around it.
One of our first efforts came in the wake of the Recording Industry Association of America’s announcement that it would sue not just individuals but also companies whose employees were using peer-to-peer applications to download copyrighted material. Our client’s data suggested that the use of such applications within corporate America was quite widespread, and our news release announced our client was making available a free subset of its managed service that would tell IT managers how pervasive P2P applications were within their environments.
The story went global and the market’s response was nearly overwhelming as our client had to babysit its servers to manage the demand for its little report. Huge impact on our client’s business, right?
Not so much.
While initially overjoyed, our client soon realized that very few of those who downloaded the free application were signing up as paying customers. Here was a textbook example of our level of effort, or outputs, being exactly right; the coverage results, or outcomes, being unbelievably massive; but the ultimate return for the client, or impact on its real business objectives, being negligible.
Now let me tell you about the same client, different story, fundamentally different result.
When Microsoft announced it was withdrawing support for its Windows 95 operating system, we went to work again. Our client’s database told us that Win95 was still installed on a hefty percentage of computers and that migrating to Windows XP, which is what Microsoft wanted its customers to do, might not be straightforward since there were a lot of applications deployed in the environment, many of them home-grown, that would function only on a Win95 OS. Again, our client made available a free download that would tell IT managers something about the pervasiveness of Win95 and its dependencies in their environments, the point being that they could then subscribe to the full service that would help them map a migration path to XP.
Well, as Victor Kiam used to say, Microsoft loved the product so much it bought the company! But I’m getting ahead of myself.
Once again, the media coverage of our client’s announcement was truly global. Once again, the demand for its free application was considerable, although less than half what was seen for the P2P app. And once again, very few of the freebies converted to revenue. But one did, and that one was the world’s largest software company, which bought thousands of licenses and gave them away to large Win95 customers specifically so they could use it to map their migration strategy to XP. And, as already mentioned, a year or so later, Microsoft, which previously had been unaware of our client, bought the entire company in a tidy exit for our client’s founders and investors.
Sadly, we lost a client, but we gained a persuasive case study illustrating that outcomes, while a potent indicator of the ROI of a PR program, can be misleading; that only by measuring the impact can the real ROI be authoritatively calculated.
Since not every case produces the kind of clear and dramatic impact discussed here, I’ll come back to this subject in future posts and show many other ways, some quite prosaic but no less legitimate, in which the impact of PR activities can be effectively measured.


By inmedia
Earlier this week, Brodeur announced new findings on how blogs are influencing traditional journalists. According to Jerry Johnson, head of strategic planning at Brodeur, “While only a small percentage of journalists feel that blogs are helpful in generating sources or exclusives, they do see blogs as particularly useful in helping them better understand the context of a story, a new story angle, or a new story idea. It appears that reporters are using blogs more for ethnographic research than they are for investigative research.”
Here are some highlights from the ongoing research project by Brodeur in conjunction with Marketwire:
- The majority of journalists said blogs were having a significant impact on news reporting in all areas tested – except news quality: The biggest impact of blogs is in the speed and availability of news. Over half also said that blogs were having a significant impact on the “tone” (61.8%) and “editorial direction” (51.1%) of news reporting.
- Blogs are a regular source for journalists: Over three-quarters of reporters see blogs as helpful in giving them story ideas, story angles and insight into the tone of an issue.
- Nearly 70% of all reporters check a blog list on a regular basis: Over one in five (20.9%) reporters said they spend over an hour per day reading blogs. Nearly three in five (57.1%) reporters said they read blogs at least two to three times a week.
- Journalists are increasingly active participants in the blogosphere: More than one in four reporters (27.7%) have their own blogs and about one in six (16.3%) have their own social networking page.
- About half of reporters (47.5%) say they are “lurkers” – reading blogs but rarely commenting.