Saturday, March 20, 2010

Making the transition from insights to value creation: How do you stack up?

One of the biggest challenges (or opportunities) facing the marketing research industry today is the blurring of boundaries between research and consulting. Increasingly, clients are expecting research firms to not only provide insights, but also act as consultants in overall business and marketing strategy planning. I believe that firms best placed to thrive in the next decade are the ones that adopt the 'consulting' mindset, rather than the pure 'information' mindset.

Even though most researchers like to think of themselves as consultants by default, in most instances this is not the case. Adopting a consultant's mindset requires a significant shift not only in skills, but also in approach and the overall delivery model.

Put simply, I differentiate researchers from consultants on one fundamental criterion: a researcher works with data, while a consultant works with data gaps. This essentially means that a consultant doesn't rely on primary data alone to solve the client's business problem. Solving a business problem goes beyond merely using primary data - it is a combination of data (primary and secondary), expertise, context, judgement and intuition.

I have a quick audit to determine where one lies on the strategic value creation continuum.

1. Do you know the key forces affecting the client's industry and business in general?
2. Do you know the client's strategic objectives for the next financial year? This includes not only marketing objectives, but also their advertising strategy, media plans and brand plans.
3. What is your level of collaboration with the client throughout the research process? Do you attend the client's internal planning sessions? Do you meet with the client at ongoing intervals to discuss findings and brainstorm hypotheses and possible actions (rather than going away once the research is commissioned and coming back two months later with a magic solution)?
4. Have you used alternative knowledge sources to address the problem (e.g. secondary data, the client's internal database, industry context, expertise and judgement)?
5. Do you provide 'insights' versus just 'information'? More importantly, do you provide a strategic action plan for the client based on the findings?
6. Do you help the client operationalise and implement these strategic recommendations? Do you follow up and see if these strategic recommendations were implemented and if they had any value to the client?
7. Finally, and most importantly, would you still be able to solve the client's business problem if there were gaps in the primary data?

If we can answer "yes" to each of these questions, we can be sure to have made the transition from being a source of "information" to a source of "strategic value" to our clients.
So, based on the above, where do you stack up?

Sunday, March 14, 2010

"Measuring" the value of insight: It can and must be done, but how? (Part I)

I had the privilege of attending the AMSRS State Conference last week and really enjoyed all the presentations and ideas put forth by the speakers. However, the one that resonated with me most was Duncan Rintoul's presentation on "The real value of market research". This is a fascinating issue, and whilst there have been considerable advancements in measuring the value of 'marketing', the same cannot be said for 'marketing research'.

The concept of measurement is not a new one, and neither is the idea of measuring the value of marketing research. But the presentation essentially crystallized some of my own thoughts on this topic, and provided a framework for thinking critically about the 'net' value of the research we do for our clients, as marketing research consultants. I think Duncan's approach is an excellent one, and I must commend him for this; not to mention that the idea has actually sparked a lot of discussion both within the agency and the client-side on how this could be taken further and implemented. 

The reason I have inverted commas around the word measuring in the title of this post is that I wanted to differentiate between the notion of 'measuring the value of insight' and 'evaluating the effectiveness of marketing research'. A research program is deemed effective if it enabled the company to make key decisions, which then translated into the achievement of broader marketing and corporate objectives. Measuring the value of marketing research, on the other hand, is quite different, and potentially more complex. It is about ascertaining the net dollar impact that the research had on the company's bottom line.

I think Duncan's presentation provides a good framework for evaluating if the research has been effective, but the point of my post is around 'measuring' the dollar impact of the research on key financial metrics. So essentially, it is the same idea taken to the next stage in the measurement chain.

So why is measuring the net dollar impact of research necessary? Why isn't ascertaining whether the research has been effective enough? Quite simply, because as a client, you are always choosing between potential research proposals and programs. To truly extract maximum value, you need to select the research program that will deliver the maximum net return to the company.

Imagine this scenario: you conducted two separate brand tracking research programs. Both provided insights that enabled key brand decisions and hence helped in achieving your marketing objectives. But which was more valuable? If you had to choose one over the other, from a purely financial point of view, which one would you choose? This is essentially the premise of my argument.
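To make the comparison concrete, here is a minimal sketch of the idea. The program names, attributed profits and costs are entirely hypothetical, and attributing profit to a research program is of course the hard part in practice; the sketch only shows the final comparison step:

```python
# Hypothetical figures: the incremental profit attributed to the decisions
# each research program enabled, and what each program cost to run.
programs = {
    "Brand tracker A": {"attributed_profit": 250_000, "research_cost": 80_000},
    "Brand tracker B": {"attributed_profit": 220_000, "research_cost": 40_000},
}

def net_value(p):
    """Net dollar impact = profit attributed to the research minus its cost."""
    return p["attributed_profit"] - p["research_cost"]

for name, p in programs.items():
    print(f"{name}: net value = ${net_value(p):,}")

# From a purely financial point of view, pick the program with the
# higher net value - not the one with the higher gross impact.
best = max(programs, key=lambda name: net_value(programs[name]))
print(f"More valuable (purely financially): {best}")
```

Note that program A has the larger gross impact, yet program B delivers the larger net return once research costs are subtracted, which is exactly why the 'net' qualifier matters.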

This is an issue which needs detailed explanation, and I am not going to give a magic answer to this, but I will end by presenting my framework:


I will be sharing my insights and ideas on how to interpret this framework and also on how to tackle the last level in the measurement chain i.e. financial impact, in my next post. So stay tuned for more, and let me know your thoughts and comments in the meanwhile.

Tuesday, March 9, 2010

The age old conundrum: "Measurement stifles creativity!"

I read an article in Ad Age recently ("Why metrics are killing creativity in advertising") which made the audacious claim that "when marketing decisions are based on numbers, we lose the desire to be creative".  Well, the classic battle between the left and right brain marketers continues!

Firstly, from the article it is evident that the author (an advertising agency creative exec) is someone who disregards the notion of measurement (rational judgement) for the sake of emotion (which is not always rational). This poses the question: why is measurement seen as alien to the creative process?

Being a quantitative person myself (someone who likes objectivity), I must admit that I disagree with the assertions in the article. I do not think that numbers hamper creativity. I view numbers as doing one thing and one thing only: turning our subjective judgements into objective ones, and giving us answers that take us closer to the truth. Granted, reality is distorted at times and numbers are sometimes incapable of delivering the truth, but we cannot know this unless we have tried.

I also think that metrics (or numbers or measurements) enhance creativity in marketing, rather than stifle it. I believe that the ability to quantify not only gives answers, but also improves the robustness of our endeavors. Thinking specifically about what the author is referring to (i.e. advertising campaigns), tracking or measuring outcomes takes us closer to understanding what the trigger points are in order to drive objectives, and creativity can thus be a more 'targeted' exercise. Not sure if there is such a concept, but what I mean by this is 'productive' creativity.

Also, with regards to testing new advertising concepts, I believe that metrics aid creativity. An idea is only as creative as the tangible impact it has on the market, and metrics help in gauging this 'tangible' impact. Without measurements or metrics, who would decide what is considered as creative? Metrics give creativity the credibility it deserves.

Well, there is always an alternative view to every argument, and in this case it is this: do not over-rely on metrics to give you answers. I think this is where the problem lies - many a time, we tend to substitute hard numeric indicators for judgement or intuition. Hard numeric indicators are just one part of the entire decision-making puzzle. Sometimes, in order to be successful, we need to act contrary to what the numbers say - do the subjective rather than the objective, the irrational rather than the rational.

Numbers stifle creativity ONLY if we let them.

Friday, March 5, 2010

The 'ROI' of ROI!



Ever wondered what the ROI of ROI is? What I mean by this is the net value derived from initiatives that are aimed at measuring ROI. Is it even worth measuring the ROI of every marketing activity?

Not really.

There are three key questions to ask before embarking on a marketing ROI measurement program:

1. Why is this information needed? What decisions rely on this information?
2. What is the potential value if these decisions are right? What is the potential risk if the decisions are wrong?
3. What is the degree of accuracy needed (or margin of error tolerated) from measurement?

If measuring the return from marketing activities costs more than the net value derived from doing so, then measurement is a failed exercise. If rough indicators or hypotheses will do the trick, then why waste time, money and effort chasing 100% accuracy?
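The three questions above can be folded into a rough back-of-envelope rule. This is my own expected-value framing, not a standard formula, and every figure below is hypothetical: measurement only pays if the value it protects (the value at stake in the decision, weighted by the chance the measurement actually changes that decision) exceeds the cost of measuring:

```python
def worth_measuring(value_at_stake, p_changes_decision, measurement_cost):
    """Expected net value of a measurement programme: the value the
    measurement protects (value at stake times the chance it actually
    changes the decision) minus what the measurement itself costs."""
    return value_at_stake * p_changes_decision - measurement_cost

# A campaign decision worth $500k, where measurement has a 25% chance of
# changing the call, justifies at most $125k of measurement spend.
print(worth_measuring(500_000, 0.25, 30_000))    # 95000.0 -> worth measuring
print(worth_measuring(500_000, 0.25, 150_000))   # -25000.0 -> rely on judgement
```

The probability term is what question 1 and question 2 are really probing; question 3 (tolerable margin of error) then governs how expensive the measurement needs to be.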

Always remember that "not everything that can be counted, counts"! If we live by this principle, we can probably make a more critical and informed choice of which activities to formally measure, and which ones to crack open through intuition and judgement.

Monday, March 1, 2010

No news is bad news. Probably not!


One of the toughest aspects of being a marketing research consultant is delivering bad news to a client. The bad news could take the form of declining scores on certain metrics, or the discovery that one of their key strategies is ineffective. Fellow consultants will probably agree that this is not the best situation to be in.

I was in a similar situation recently. One of my clients saw their satisfaction scores and other performance metrics decline significantly, and to be very honest, this freaked the hell out of me at first.

If handled inappropriately, such situations can easily become a you vs. the client contest, one that you certainly don't want to be in. The key lies in taking the client on a journey with you - a journey of discovery and opportunity identification.

From my own experience of such a situation, I have learnt the following: 
  • First and foremost, keep the client in the loop. If you discover something potentially negative, it is probably a good idea to let them know and involve them in the process early. You don't want the news to come as a 'shock' to them, which will ultimately call into question the validity and accuracy of your claims.
  • Leverage the client's insights for hypothesis building. The value of this cannot be emphasised enough. Clients know their business better than you. In my case, I benefited immensely from bouncing ideas off the client, and this eventually led to the identification of certain potential triggers of the problem.
  • Know the data and context backwards. This is probably even more important when reporting something negative to the client. You need to build and demonstrate confidence, which comes from having an in-depth and expert understanding of the problem.
  • Try to find solutions through innovation. Situations like this demand more innovation and creativity than usual. Try to find potential triggers of the problem through unique approaches and analysis. Don't be afraid to experiment. In my case, I found value from using a new approach and combining it with interpretation, based on the context and existing hypotheses. Upon doing this, the root cause of the problem was clearly evident.
  • Know your communication strategy. By this, I mean your strategy or approach for communicating your findings to the client and their wider stakeholders. You can take one of two approaches: the subtle approach or the 'hard facts' approach. Having done both (in two different situations), I think this is purely a function of the nature of the client. If your client is extremely skeptical and aggressive, you probably want to take the subtle approach! (Caveat: ensure that the essence of the problem is not lost in the subtlety.) Our job as consultants is to accurately reflect the depth and breadth of our findings.
  • Finally, sell the problem as an opportunity. In every problem lies an opportunity. Make sure to communicate this to the client. Demonstrate how your identification of the problem could lead to better opportunities for improvement and re-assessment of some of the key strategies, which could potentially have tremendous long-term value for their business. In order to do this, we need to sell a solution, and not just report on the problem.
If we can follow the above, we can ensure that we not only solve our clients' toughest problems, but also ensure that they come back to us, when they have another one!