Sunday, June 6, 2010

What is this thing called 'insight'?

As marketing researchers, we live and breathe the notion of 'insight'. Identifying breakthrough insight is probably the dream of every marketing research consultant on every client engagement or research project. But I often ask myself, what is this thing called 'insight'?

Great marketing research is an art, and the word 'insight' probably means different things to different people. Some may see it as a new way of looking at a business problem; others may think of it as identifying a 'eureka' finding that was previously hidden or wrapped up in the numbers.

But, too often, these 'eureka' moments don't translate into action. Too often, brilliant findings don't hit the mark because they are not actionable (or the client doesn't know how to translate them into action). So the way I look at it, there is one (and only one) litmus test to differentiate 'true' insight from the rest.

It is this:
True insight makes action planning superfluous. That is, true insight comes with an instinctive recognition that the research problem has been nailed. Not only does it inspire the research consultant, but also the entire client organisation. The action plan to translate this insight into reality becomes overtly obvious. True insight has the potential not only to sell itself but also to action itself in the client organisation, on its own.

Ultimately, the role of marketing research consulting is to solve business problems. Business problems are only solved if research findings are translated into action. So any eureka finding is potentially not 'insight' at all if it never leaves the client's boardroom.

Saturday, May 29, 2010

Re-thinking customer satisfaction research

I am a strong believer in the notion that the ultimate goal of any research program is to drive business results. Customer satisfaction research is no different. I came across an interesting perspective on customer satisfaction research in Research Magazine and it got me thinking.

I think, many times, customer satisfaction is used as a company-wide KPI without truly understanding how it drives business results, i.e. how customer satisfaction links with key business outcomes such as sales, market share, profit or even share price. Understanding this causal link with business measures is the first step in designing an effective program, and it is crucial for any business to understand the impact that relative levels of customer satisfaction have on the business. For example, on a 1-5 customer satisfaction scale, does it make more business sense to convert 'satisfied' customers into 'very satisfied', or rather to convert 'dissatisfied' customers into 'satisfied'?
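To make that question concrete, here is a purely illustrative sketch; every figure below is an invented assumption, not real data, and in practice the per-level revenue figures would come from the causal analysis described above:

```python
# Purely illustrative sketch: which satisfaction shift is worth more?
# All figures are invented assumptions, not real data.

# Assumed annual revenue per customer at each satisfaction level (1-5 scale)
revenue_per_level = {1: 200, 2: 350, 3: 500, 4: 650, 5: 1100}

# Assumed number of customers currently at each level
customers_per_level = {1: 1000, 2: 2000, 3: 5000, 4: 8000, 5: 4000}

def gain_from_shift(from_level, to_level, n_customers):
    """Revenue gain from moving n_customers between satisfaction levels."""
    return n_customers * (revenue_per_level[to_level] - revenue_per_level[from_level])

# Converting 10% of 'satisfied' (4) customers into 'very satisfied' (5)
gain_4_to_5 = gain_from_shift(4, 5, 0.10 * customers_per_level[4])

# Converting 10% of 'dissatisfied' (2) customers into 'satisfied' (3)
gain_2_to_3 = gain_from_shift(2, 3, 0.10 * customers_per_level[2])
```

Under these invented numbers, lifting the satisfied to very satisfied is worth far more; flip the revenue assumptions and the answer flips too, which is exactly why the causal link has to be established first.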

One way of identifying the key levers is by understanding the nature of the relationship customer satisfaction has with business measures. This relationship could be one of the following:

Scenario 1 is a classic example of a case where the true focus of research must be on 'dissatisfaction' rather than 'satisfaction', given that higher gains may not be achieved by delivering superior customer experience levels. In such cases, the research must be designed and tailored to focus specifically on the dissatisfied segment, and how to improve their experiences.

Scenario 2 is an example of a business where satisfaction or customer experience is a critical part of the overall offering (for example, the airline industry). Here, the research program must be more holistic, and designed not only to address poor experience levels but also to build and drive superior experiences.

Scenario 3 is an example of a business where customer experience is not part of the overall promise/ offering but where superior customer experience can be leveraged to build competitive advantage. Here, the focus of the business and research programs must be on converting the 'satisfied' into 'very satisfied', and even further into promoters or advocates.

One still needs to track customer satisfaction with all customers as a high-level KPI - however, prioritising further research based on the above will not only help in getting the biggest bang for the buck from these programs, but also help in operationalising the results internally.

Tuesday, May 11, 2010

Is your social media investment worth it?

I recently saw an interesting presentation on the basics of social media by Olivier Blanchard. Personally, I think the framework and approach are really valuable for understanding the basics of ROI on any marketing program in general.

Quite simply, it is the distinction between non-financial and financial ROI. Even if your social media campaign achieves high click-throughs, visits and online conversations, this does not necessarily translate into sales. The same principle applies to advertising - I mentioned in one of my earlier posts that awareness means nothing if it doesn't move consumers through the funnel (both in the short and the long term). This raises the question: should we really be measuring the salience and relevance of the advertising instead?

Blanchard's presentation is well worth a quick browse. It makes the point that establishing a baseline measure of sales/profits is a great starting point for any ROI measurement program, i.e. comparing the baseline level of sales with the lift in sales achieved during the period in which these social media activities ran.

However, I think this might be too simplistic, as the lift in sales may not be attributable to social media alone. Also, not everything is meant to have an impact in the short term. By definition, brand building is long term, and I think social media is a long-term channel. Consumers engage with social media to have conversations and be heard, not primarily to buy brands. Brand building is the result of these conversations and interactions.
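As a rough sketch of the baseline-comparison logic, with one extra knob for the attribution problem just described: all figures, including the attribution share, are invented assumptions, not a prescribed method.

```python
# Minimal sketch of a baseline-vs-campaign ROI comparison.
# All numbers are hypothetical; the attribution share is a judgement call.

baseline_monthly_sales = 500_000   # assumed average sales before the campaign
campaign_monthly_sales = 560_000   # assumed sales during the campaign period
campaign_cost = 20_000

lift = campaign_monthly_sales - baseline_monthly_sales   # raw lift: 60,000

# What share of the lift do we credit to social media alone?
# Pure assumption - other activities, seasonality, etc. share the credit.
attribution_share = 0.5

attributed_lift = lift * attribution_share
roi = (attributed_lift - campaign_cost) / campaign_cost  # net return per dollar
```

Note that the long-term brand-building effect argued for above never shows up in this calculation at all, which is precisely its limitation.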

Anyway, I will leave you to enjoy this presentation for now.

Tuesday, April 27, 2010

Marketing: The undervalued or the engine room?

One of the areas in marketing I am immensely passionate about is demonstrating the strategic value-creation philosophy of marketing. By definition, this implies that marketing is more than just sales or advertising, and definitely more than just a P&L expense. Marketing's standing in a firm goes beyond being merely a business function - it is the driver of long-term, sustainable and profitable growth for companies.

Given that marketing must be responsible for long-term growth and profitability, its domain, by definition, can't be restricted to advertising or sales. On the contrary, I look at it as an all-pervasive discipline spanning all functions in the company. However, in reality, there is high variance in how companies define and treat the marketing function.


Through my personal observations, I have built up a typology of the different roles marketing plays in companies. Simply put, these can be classified as: the undervalued, the underdog and the engine room.




The first category is the undervalued. Here, marketing is viewed merely as a communications tool. For these companies, marketing is little more than advertising and promotions, with its main purpose being to influence consumer decision making. I have also observed that it is in these companies that there is the highest pressure to justify marketing ROI and the highest risk of marketing budget cuts, demonstrating the lack of belief and support for the function by senior leadership. Naturally, these companies also tend to be less sophisticated in their marketing organisation and practices.


The second category is the underdog. Here, the marketing function is viewed as a challenger, with increasing emphasis placed on contemporary marketing philosophies and practices. Marketing in these companies encompasses consumer insights, product innovation, strategy and pricing, and is often leveraged to drive long-term business growth. The marketing organisation tends to exhibit a high degree of sophistication in these companies. However, in spite of this, marketing lacks a seat at the table, and finance usually pips marketing to the top leadership roles.


And last but not least - I like to refer to the third and most influential role of marketing as the engine room.  Here, marketing is more than a business function - it is the single most important driver of market share, growth and profits; it is a mindset of winning in the marketplace through robust consumer-centric strategies. These companies define marketing broadly to include P&L delivery, and hence, marketers in these companies go on to key top management positions in the company. Given that marketing includes cross-functional business leadership, marketing in these companies tends to be the most sophisticated and analytically driven. These are the companies that acknowledge the value of brand building and consumer insights, and invest heavily in advertising and marketing research. It is the companies that treat marketing as the engine room that are the best training grounds for classical marketing fundamentals.


Ideally, you want to be pushing to the right-hand side of this continuum, if you are to truly drive long-term growth and value through marketing.

Wednesday, April 14, 2010

Marketing's deadliest sin: Measuring the tip of the iceberg in advertising

One of the biggest sins a marketer can commit is scratching the 'tip of the iceberg' in advertising. It surprises me how often clients gauge the impact of their campaigns through superficial metrics such as awareness and message take-out.


Don't get me wrong. Awareness and message take-out are crucial in assessing the first level of impact of the advertising, but they are just the tip of the iceberg in terms of its actual impact. One campaign may have an awareness of 90% and another 60%, but does this mean the latter is less effective? Even though the latter reached fewer people, it could have had a stronger impact on equity and subsequently on sales.
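A quick back-of-the-envelope illustration makes the point; the awareness and conversion figures below are invented assumptions:

```python
# Back-of-the-envelope: higher awareness does not mean higher sales impact.
# All figures are invented assumptions.

population = 1_000_000

# Campaign A: 90% awareness, but a weak pull-through to purchase
aware_a = 0.90 * population
conversion_a = 0.01            # assumed purchase rate among aware consumers

# Campaign B: 60% awareness, but a stronger equity effect and pull-through
aware_b = 0.60 * population
conversion_b = 0.02

sales_a = aware_a * conversion_a   # fewer buyers despite the bigger reach
sales_b = aware_b * conversion_b   # more buyers from the 'less effective' ad
```

On these assumptions, the 60%-awareness campaign generates a third more buyers, which is exactly what awareness alone can never tell you.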

What goes on 'between' message take-out and the actual sale is referred to as the 'black box' in marketing. It is every marketer's fundamental responsibility to see through (or at least try to see through) this black box. Advertising is ultimately intended to have two impacts: a short-term sales impact, and an equity impact which then leads to sales in the long term. So if you are not measuring these when trying to understand the impact of your campaigns, then what exactly are you measuring?

I may remember your ad, may love it and could talk all day long about it. But this still does not mean that the ad strengthened the power of your brand in my mind, or that I will buy it. Awareness and message take-out will tell you if your brand reached people, but not if people reached for your brand!

Saturday, April 10, 2010

Measuring the value of insight: Closer to the holy grail - Part II




A few weeks ago I wrote a post on the topic of 'measuring' the value of insight and proposed a conceptual framework demonstrating the concept. One of the key points I was trying to make in that article was the distinction between measuring the value of insight and gauging the effectiveness of marketing research programs. As mentioned earlier, gauging the effectiveness of a program is relatively simple, i.e. ascertaining whether the research delivered insights that then enabled the company to make certain decisions towards meeting its marketing objectives. Measuring the value of the research is potentially more complex. By definition, it means linking the outcomes of the research program to financial metrics such as ROI, profitability or even shareholder value.

This post is the second in a series of articles that I hope will take us one step closer to finding that holy grail!

I have illustrated my framework by using the example of a new product concept testing program.


From the 'insight funnel' above, it is evident that the new product testing program was effective. It enabled the company to successfully launch the new concept and boost sales, which then led to an improvement in profits and ultimately shareholder value. But what was the 'value' of this research program?

What makes this topic fascinating is that, just as beauty lies in the eyes of the beholder, the value of insight lies in the eyes of the decision maker. What I mean by this is that the value of a research program is subjective.

Let's assume that the company invested $100,000 in the program, and the ultimate net profit impact as a result of the new product was $1 million. Can we attribute this $1 million to the research program? Does this mean that the research delivered an ROI of 10x? The answer is yes, but only if the research alone led to the new product launch. More often than not, this will not be the case.

There are situations where managers would probably take the same decisions in the absence of research as they would with its support. In the example illustrated above, what if the managers (based on intuition and judgement) would have launched the concept anyway and created communications that resonated with the target? Does this mean that the research program was not valuable?

Absolutely not. In fact, any insight delivered plays the role of mitigating risk or increasing confidence. Given that businesses face multiple decision choices and need to make trade-offs based on the risk-reward potential of decisions, an increase in confidence leads to an increase in the probability of making that decision. If the managers were only 60% confident of success before the research, the insights from the research can be said to have increased confidence (or the probability of making the decision) by 40%. Hence, if the net profit impact was $1 million, $400,000 (40%) can be attributed to the research program.
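The arithmetic above can be sketched as a simple calculation. The dollar figures and confidence levels come from the example in this post; the function itself is just one possible way to formalise the logic, not a standard method.

```python
# Sketch of attributing research value via the increase in decision confidence.
# Figures are from the worked example; the formalisation is one possible choice.

def value_attributed_to_research(profit_impact, confidence_before, confidence_after):
    """Credit the research with the uplift in confidence it delivered."""
    uplift = confidence_after - confidence_before
    return profit_impact * uplift

research_cost = 100_000
profit_impact = 1_000_000

# Managers were 60% confident before the research, fully confident after
attributed = value_attributed_to_research(profit_impact, 0.60, 1.00)  # ~$400,000
roi = attributed / research_cost                                      # ~4x, not 10x
```

Note how sensitive the answer is to the 'confidence before' estimate, which is exactly where the negotiation discussed below comes in.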

This throws another potentially complex variable into the mix: negotiation. Negotiating the value of the insights we deliver, in terms of risk reduction and increased confidence, is key to resonating with our clients' business needs. The more robust our programs and insights, the better our ability to negotiate, and the higher the value of the insights we deliver.


Saturday, March 20, 2010

Making the transition from insights to value creation: How do you stack up?

One of the biggest challenges (or opportunities) facing the marketing research industry today is the blurring of boundaries between research and consulting. Increasingly, clients are expecting research firms to not only provide insights, but also act as consultants in overall business and marketing strategy planning. I believe that firms best placed to thrive in the next decade are the ones that adopt the 'consulting' mindset, rather than the pure 'information' mindset.

Even though most researchers like to think that they are consultants (by default), this is not the case in most instances. Adopting a consultant's mindset requires a significant shift not only in skills, but also in approach and the overall delivery model.

Quite simply put, I differentiate researchers and consultants based on one fundamental criterion: a researcher works with data, while a consultant works with data gaps. This essentially means that a 'consultant' doesn't rely on the primary data to solve the client's business problem. Solving a business problem goes beyond merely using primary data - it is a combination of data (primary and secondary), expertise, context, judgement and intuition. 

Here is a quick audit to determine where one lies on the strategic value-creation continuum.

1. Do you know the key forces affecting the client's industry and business in general?
2. Do you know the client's strategic objectives for the next financial year? This includes not only marketing objectives, but also their advertising strategy, media and brand plans.
3. What is your level of collaboration with the client throughout the research process? Do you attend the client's internal planning sessions? Do you meet with the client at ongoing intervals to discuss findings and brainstorm hypotheses and possible actions (rather than going away once the research is commissioned and coming back after two months with a magic solution!)?
4. Have you used alternative knowledge sources to address the problem? (i.e. secondary data, client's internal database information, industry context, expertise and judgement). 
5. Do you provide 'insights' versus just 'information'? More importantly, do you provide a strategic action plan for the client based on the findings?
6. Do you help the client operationalise and implement these strategic recommendations? Do you follow up and see if these strategic recommendations were implemented and if they had any value to the client?
7. Finally, and most importantly, would you still be able to solve the client's business problem if there were gaps in the primary data?

If we can answer "yes" to each of these questions, we can be sure to have made the transition from being a source of "information" to a source of "strategic value" to our clients.
So, based on the above, where do you stack up?

Sunday, March 14, 2010

"Measuring" the value of insight: It can and must be done, but how? (Part I)

I had the privilege of attending the AMSRS State Conference last week and really enjoyed all the presentations and ideas put forth by the speakers. However, the one that resonated with me the most was the one by Duncan Rintoul on "The real value of market research". This is a fascinating issue, and whilst there have been considerable advancements in measuring the value of 'marketing', the same cannot be said for 'marketing research'.

The concept of measurement is not a new one, and neither is the idea of measuring the value of marketing research. But the presentation crystallised some of my own thoughts on this topic, and provided a framework for thinking critically about the 'net' value of the research we do for our clients as marketing research consultants. I think Duncan's approach is an excellent one, and I must commend him for it; not to mention that the idea has sparked a lot of discussion, both within the agency and on the client side, about how it could be taken further and implemented.

The reason why I have inverted commas around the word measuring in the title of this post is that I wanted to differentiate between the notion of 'measuring the value of insight' and 'evaluating the effectiveness of marketing research'. A research program is deemed effective if it enabled the company to make key decisions, which then translated into the achievement of broader marketing and corporate objectives. Measuring the value of marketing research, on the other hand, is quite different, and potentially more complex. It is about ascertaining the net dollar impact that the research had on the bottom line of the company.

I think Duncan's presentation provides a good framework for evaluating if the research has been effective, but the point of my post is around 'measuring' the dollar impact of the research on key financial metrics. So essentially, it is the same idea taken to the next stage in the measurement chain.

So why is measuring the net dollar impact of research necessary? Why isn't ascertaining whether the research has been effective enough? Well, quite simply, it is because as a client you are always weighing up potential research proposals and programs. To truly extract maximum value, you need to select the research program that will deliver maximum net returns to the company.

Imagine this scenario: you conducted two separate brand tracking programs. Both provided insights that enabled key brand decisions and hence helped in achieving your marketing objectives. But which was more valuable? If you had to choose one over the other, from a purely financial point of view, which one would you choose? This is essentially the premise of my argument.

This is an issue which needs detailed explanation, and I am not going to give a magic answer to this, but I will end by presenting my framework:


I will be sharing my insights and ideas on how to interpret this framework and also on how to tackle the last level in the measurement chain i.e. financial impact, in my next post. So stay tuned for more, and let me know your thoughts and comments in the meanwhile.

Tuesday, March 9, 2010

The age old conundrum: "Measurement stifles creativity!"

I read an article in Ad Age recently ("Why metrics are killing creativity in advertising") which made the audacious claim that "when marketing decisions are based on numbers, we lose the desire to be creative".  Well, the classic battle between the left and right brain marketers continues!

Firstly, from the article it is evident that the author (an advertising agency creative exec) is someone who disregards the notion of measurement (rational judgement) for the sake of emotion (which is not always rational). This poses the question: why is measurement seen as alien to the creative process?

Being a quantitative person myself (someone who likes objectivity), I must admit that I disagree with the assertions in the article. I do not think that numbers hamper creativity. I view numbers as doing one thing and one thing only: turning our subjective judgements into objective ones, and giving us answers that take us closer to the truth. Granted, reality is distorted at times and numbers are sometimes incapable of delivering the truth, but we cannot know this unless we have tried.

I also think that metrics (or numbers, or measurements) enhance creativity in marketing rather than stifle it. I believe that the ability to quantify not only gives answers, but also improves the robustness of our endeavours. Thinking specifically about what the author is referring to (i.e. advertising campaigns), tracking or measuring outcomes takes us closer to understanding the trigger points that drive objectives, and creativity can thus become a more 'targeted' exercise. Not sure if there is such a concept, but what I mean by this is 'productive' creativity.

Also, with regards to testing new advertising concepts, I believe that metrics aid creativity. An idea is only as creative as the tangible impact it has on the market, and metrics help in gauging this 'tangible' impact. Without measurements or metrics, who would decide what is considered as creative? Metrics give creativity the credibility it deserves.

Well, there is always an alternative view to every argument, and in this case it is this: do not 'over-rely' on metrics to give you answers. I think this is where the problem lies - many times, we tend to substitute hard numeric indicators for judgement or intuition. Hard numeric indicators are just one part of the entire decision-making puzzle. Sometimes, in order to be successful, we need to do the contrary of what the numbers say. Sometimes we need to be subjective rather than objective, irrational rather than rational.

Numbers stifle creativity ONLY if we let them.

Friday, March 5, 2010

The 'ROI' of ROI!



Ever wondered what the ROI of ROI is? What I mean by this is the net value derived from initiatives that are aimed at measuring ROI. Is it even worth measuring the ROI of every marketing activity?

Not really.

There are three key questions to ask before embarking on a marketing ROI measurement program:

1. Why is this information needed? What decisions rely on this information?
2. What is the potential value if these decisions are right? What is the potential risk if the decisions are wrong?
3. What is the degree of accuracy needed (or margin of error tolerated) from measurement?

If measuring the return from marketing activities costs more than the net value derived from doing so, then measurement is a failed exercise. If rough indicators or hypotheses will do the trick, then why waste time, money and effort chasing 100% accuracy?
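One way to sketch this trade-off is to compare the expected value of the decision with and without measurement, in the spirit of a value-of-information calculation. Every figure below is an invented assumption, purely for illustration:

```python
# Sketch of the 'ROI of ROI' test: measure only if the expected value of
# better decisions exceeds the cost of measuring. All figures are invented.

decision_value_if_right = 2_000_000   # assumed upside of a right decision
loss_if_wrong = 500_000               # assumed downside of a wrong one

p_right_without_measurement = 0.70    # judgement/intuition alone
p_right_with_measurement = 0.85       # assumed accuracy gain from measuring
measurement_cost = 150_000

def expected_value(p_right):
    """Expected payoff of the decision at a given probability of being right."""
    return p_right * decision_value_if_right - (1 - p_right) * loss_if_wrong

value_of_measurement = (expected_value(p_right_with_measurement)
                        - expected_value(p_right_without_measurement))

net_value = value_of_measurement - measurement_cost  # positive: worth measuring
```

On these numbers measurement pays for itself; shrink the accuracy gain or raise the measurement cost and the net value goes negative, which is the "failed exercise" above.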

Always remember that "not everything that can be counted, counts"! If we live by this principle, we can probably make a more critical and informed choice about which activities to formally measure, and which ones to crack open through intuition and judgement.