Getting back to blogging sure feels great!
I often wonder whether we as research consultants should have all the answers. For example, what if the client insists on using the data to answer questions that the research is incapable of answering? Do we keep digging (knowing that we are moving into a zone where our approach is no longer scientific), or do we turn back and say that the design cannot answer the question?
I think this is a dilemma that everyone in this industry will have faced (or will face) some time in their career. I think it all begins with managing expectations. Client expectations.
We need to articulate the outcomes of our work better - what decisions will the client truly be able to make? Clients have a tendency to squeeze costs, and we as researchers have a tendency to squeeze our design to meet those costs. Naturally, this comes with limitations in terms of what we can answer.
Imagine doctors. Do they claim to fix a broken leg by giving painkillers? No. I think we need to approach our work the same way.
Friday, March 25, 2011
Saturday, November 27, 2010
Pay for performance, but whose?
One of the key talking points in marketing research circles recently has been the notion of 'pay for performance'. Essentially, this means compensating marketing research consultants based on the 'quality' of their services, and/or value to the client's business. But in my opinion, this notion throws up the age-old question: how does one determine the performance of marketing research and insight?
It is true that advertising agencies have been quick to adopt this practice. Well, I believe this has been driven partially due to the transparency in what constitutes a good advertising campaign. The client and agency can generally agree on the parameters and benchmarks for campaign success such as awareness, cut through, memorability, brand associations or even ROI. This can then be measured either through market-based validations (in the case of sales) or even through post-campaign consumer research. Once the parameters have been determined and their assessment is agreed upon, the performance of a campaign no longer lives in a black box!
However, let's now apply the same principle to marketing research consulting. I am sure we all agree that 'performance' is determined by 'outcomes'. Well, the outcomes of a market research engagement are judged on the quality of the insights delivered and the actionability of the recommendations for the client. Interestingly, the relevance and actionability of these insights is highly subjective. What agencies think is high quality work may be far from great in the eyes of the client. But who really decides this?
The true test of performance is the value of the insights to the client's business, i.e. did the research help the business make crucial decisions that resulted in financial value? In all honesty, we often don't even get that far - sometimes research is not acted upon at all, in which case it is hard to truly understand the value of the work. The more we try to gauge the performance of a research engagement, the more we realise that the outcome lies in the hands of the client.
In my opinion, the performance of research lies more in the hands of the client than the agency! Consider these scenarios:
What the client does with the insight goes a long way in determining its perceived performance. A brilliant piece of insight may not even leave the boardroom of the client, in which case did the research engagement fail to deliver? Or a poor piece of research was executed brilliantly by the client - in this case, should the agency get the credit?
Based on the above, I am not really convinced that 'true' performance-based payment can be implemented yet. It's probably a great notion to work towards, however, I think it is fundamentally flawed as an approach, due to the number of extraneous factors involved!
Wednesday, August 25, 2010
Process and Creativity: Have you found your balance?
Finding the optimal balance between creativity and process is perhaps the biggest challenge in everything we do.
I often find myself asking these fundamental questions: How much process is necessary? When should we break free from the process and start being creative? And does process kill creativity?
As unscientific as this may be, I think that the relationship between creativity and process probably looks like a classic bell curve:
Process boosts creativity. In the absence of any process, creativity suffers. One could argue that processes provide the fuel for new ways of thinking to emerge. I also think that process makes creativity 'focused' (i.e. productive creativity).
But I think too much process acts as a deterrent to new ideas. The human mind has a tendency to switch off and be carried along by the inertia of process. It is at this stage that we stop challenging the status quo and see new thinking die.
Ideally, the objective is to enhance creativity in everything we do. You want to be able to take a step back at all times and really question where you sit on the curve. You don't want to be hanging loose and shooting in the dark, nor do you want to be constrained by the shackles of process.
If you learn how to find this optimal balance, I would be keen to hear from you.
Saturday, August 21, 2010
Innovation or lack thereof?
I recently read an interesting point of view on the Research.Opinionated.Insightful blog of the Research Magazine. The author makes an interesting point: it's not innovation that the marketing research industry lacks, it's implementation.
Every now and then, you will hear the marketing research industry being criticised for not being bold enough or for lacking innovation. While this is probably true in some cases, I hold an alternative point of view. Yes, traditional data collection techniques will soon become obsolete, and some agencies function as if they are still in the dark ages. But I think that, relative to other consulting professions, the MR industry has embraced innovation and change.
Yes, there is still a lack of widespread diffusion of innovation within the industry. However, a new wave of innovation is emerging in the research consulting industry. New digital research techniques, neuroscience and the 'groundswell' (the power and influence of social media) are changing the face of the research consulting profession. While some pockets of the industry have been quick to embrace change and adopt these advances in how they address client problems, I have no doubt the others will follow suit sooner rather than later.
Moreover, innovation doesn't always have to relate to technology; it could also relate to thought-leadership. Some great thinking exists within the marketing research industry. An example of this is the Brand Value Creator (BVC) methodology by Synovate, a research technique that links brand outcomes to market share outcomes. Even though I work for this company, I can safely say that this methodology is truly thought-leading. And there are several other such examples in the industry.
However, if you still think the MR industry is too conservative, I think part of the blame rests with clients. Too often, clients do not want to challenge the status quo in how they approach problems, and this obviously affects how agencies approach research programs.
Finally, take a step back and think about this: how much has the management consulting industry changed over the years? I'm not sure there has been any radical change in how they work or approach problems. Most of the classic strategic concepts and theories developed by the management consulting industry are decades old and probably outdated by now. When I was at university, I kept hearing about the BCG Matrix (by the Boston Consulting Group) and the McKinsey Grid, but I often wonder: hasn't the management consulting industry come up with any new groundbreaking tools since then?
Friday, July 9, 2010
What defines leadership?
Finally, after a month-long writer's block and an incredibly busy period at work, I'm back to blogging again. This post is going to be less about marketing, and more about management. I have been thinking about leadership lately, and what makes for great leadership. In trying to understand this, I have been asking myself one question: what is the true test of leadership?
Leadership, obviously, can be viewed from many different perspectives. Different people view leadership differently. For some, leadership is about leading people and making things happen; for others, it is about being followed and respected. Both definitions are incredibly valuable.
As I said, different people view leadership differently. This is probably a function of one's own life experiences and inherent leadership traits. Having said this, I view leadership differently too.
For me, leadership is about inspiration. A true leader must be able to inspire people to do their best, and in doing so, unleash their true potential. Under true leadership, the creative potential and cumulative performance of the entire organisation grows. In my mind, this is by far the most difficult challenge for a leader. To inspire others is hard. You can only inspire others if you're inspired yourself. You can only make others passionate if you're passionate yourself. True leadership is about great emotional intelligence; it is about seeing and feeling what others can't.
It is about making others think bigger than they thought they could, and in doing so, making them run faster than they ever could.
Sunday, June 6, 2010
What is this thing called 'insight'?
As marketing researchers, we live and breathe the notion of 'insight'. Identifying breakthrough insight is probably the dream of every marketing research consultant on every client engagement or research project. But I often ask myself, what is this thing called 'insight'?
Great marketing research is an art, and the word 'insight' probably means different things to different people. Some may see it as a new way of looking at a business problem; others may think of it as identifying a 'eureka' finding that was previously hidden within the numbers.
But too often, these 'eureka' moments don't translate into action. Too often, brilliant findings fail to hit the mark because they are not actionable (or the client doesn't know how to translate them into action). So the way I look at it, there is one (and only one) litmus test to differentiate 'true' insight from the rest.
It is this:
True insight makes action planning superfluous. That is, true insight comes with an instinctive recognition that the research problem has been nailed. Not only does it inspire the research consultant, but also the entire client organisation. The action plan to translate the insight into reality becomes obvious. True insight has the potential not only to sell itself but also to action itself within the client organisation, on its own.
Ultimately, the role of marketing research consulting is to solve business problems. Business problems are only solved if research findings are translated into action. So any eureka piece of finding is potentially not 'insight' at all, if it doesn't leave the boardroom of the client.
Saturday, May 29, 2010
Re-thinking customer satisfaction research
I am a strong believer in the notion that the ultimate goal of any research program is to drive business results. Customer satisfaction research is no different. I came across an interesting perspective on customer satisfaction research in Research Magazine and it got me thinking.
I think customer satisfaction is often used as a company-wide KPI without a true understanding of how it drives business results, i.e. how customer satisfaction links with key business outcomes such as sales, market share, profit or even share price. Understanding this causality with business measures is the first step in designing an effective program, and it is crucial for any business to understand the impact that relative levels of customer satisfaction have on the business. For example, on a 1-5 customer satisfaction scale, does it make more business sense to convert 'satisfied' customers to 'very satisfied', or to convert 'dissatisfied' customers to 'satisfied'?
One way of identifying the key levers is by understanding the nature of the relationship customer satisfaction has with business measures. This relationship could be one of the following:
Scenario 1 is a classic example of a case where the true focus of research must be on 'dissatisfaction' rather than 'satisfaction', given that higher gains may not be achieved by delivering superior customer experience levels. In such cases, the research must be designed and tailored to focus specifically on the dissatisfied segment, and how to improve their experiences.
Scenario 2 is an example of a business where satisfaction or customer experience is a critical part of the overall offering (e.g. the airline industry). Here, the research program must be more holistic, designed not only to address poor experiences but also to build and drive superior ones.
Scenario 3 is an example of a business where customer experience is not part of the overall promise/offering, but where superior customer experience can be leveraged to build competitive advantage. Here, the focus of the business and research programs must be on converting the 'satisfied' into 'very satisfied', and even further into promoters or advocates.
One still needs to track customer satisfaction with all customers as a high-level KPI - however, prioritising further research based on the above will not only help in getting the biggest bang for the buck from these programs, but also help in operationalising the results internally.
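The prioritisation question above - which conversion is worth more? - can be sketched numerically. All figures below are invented for illustration (assumed spend per satisfaction level and assumed customer counts), not benchmarks from any real study:

```python
# Illustrative sketch only: every number here is an assumption,
# invented to show the shape of the prioritisation question.

# Assumed annual spend per customer at each satisfaction level (1-5 scale).
spend_by_level = {1: 200, 2: 350, 3: 500, 4: 650, 5: 700}

# Assumed number of customers currently at each level.
customers = {1: 1000, 2: 2000, 3: 5000, 4: 3000, 5: 1000}

def lift_from_conversion(from_level, to_level, conversion_rate):
    """Revenue lift if a share of customers moves between satisfaction levels."""
    moved = customers[from_level] * conversion_rate
    return moved * (spend_by_level[to_level] - spend_by_level[from_level])

# Option A: convert 10% of 'dissatisfied' (level 2) customers to 'satisfied' (4).
lift_a = lift_from_conversion(2, 4, 0.10)

# Option B: convert 10% of 'satisfied' (level 4) customers to 'very satisfied' (5).
lift_b = lift_from_conversion(4, 5, 0.10)

print(f"Dissatisfied -> satisfied:      {lift_a:,.0f}")
print(f"Satisfied -> very satisfied:    {lift_b:,.0f}")
```

Under these made-up assumptions, fixing dissatisfaction dominates (a flatter spend curve at the top of the scale would correspond to Scenario 1); reverse the shape of the spend curve and Option B wins (Scenario 3). The point is that the shape of the satisfaction-to-spend relationship, not the satisfaction score itself, should drive where the research program focuses.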
Tuesday, May 11, 2010
Is your social media investment worth it?
I recently saw an interesting presentation on the basics of social media by Olivier Blanchard. Personally, I think the framework and approach are really valuable for understanding the basics of ROI on any marketing program in general.
Quite simply, it is the distinction between non-financial and financial ROI. Even if your social media campaign achieves high click-throughs, visits and online conversations, this does not necessarily imply conversion into sales. The same principle applies to advertising - I mentioned in an earlier post that awareness means nothing if it doesn't move consumers through the funnel (both in the short and long term). This raises the question: should we really be measuring the salience and relevance of the advertising instead?
Blanchard's presentation is worth a quick browse. It makes the point that establishing a baseline measure of sales/profits is a great starting point for any ROI measurement program, i.e. comparing the baseline level of sales with the lift in sales achieved during the period when the social media activities took place.
However, I think this might be too simplistic, as the lift in sales may not be attributable to social media alone. Also, not everything is meant to have an impact in the short term. By definition, brand building is long term, and I think social media is a long-term channel. Consumers engage with social media to have conversations and be heard, not primarily to buy brands. Brand building is the result of these conversations and interactions.
Anyway, I will leave you to enjoy this presentation for now.
Tuesday, April 27, 2010
Marketing: The undervalued or the engine room?
One of the areas in marketing I am immensely passionate about is demonstrating the strategic value-creation philosophy of marketing. By definition, this implies that marketing is more than just sales or advertising, and definitely more than just a P&L expense. Marketing's standing in a firm goes beyond being merely a business function - it is the driver of long-term, sustainable and profitable growth for companies.
Given that marketing must be responsible for long-term growth and profitability, its domain, by definition, can't be restricted to advertising or sales. On the contrary, I look at it as an all-pervasive discipline spanning every function in the company. In reality, however, there is high variance in how companies define and treat the marketing function.
Through my personal observations, I have built up a typology of the different roles marketing plays in companies. Simply put, these can be classified as: the undervalued, the underdog and the engine room.
The first category is the undervalued. Here, marketing is viewed merely as a communications tool. For these companies, marketing is little more than advertising and promotions, with its main purpose being to influence consumer decision making. I have also observed that it is in these companies that there is the highest pressure to justify marketing ROI and the highest risk of marketing budget cuts, demonstrating the lack of belief and support for the function by senior leadership. Naturally, these companies also tend to be less sophisticated in their marketing organisation and practices.
The second category is the underdog. Here, the marketing function is viewed as a challenger, with increasing emphasis placed on contemporary marketing philosophies and practices. Marketing in these companies encompasses consumer insights, product innovation, strategy and pricing, and is often leveraged to drive long-term business growth. The marketing organisation tends to exhibit a high degree of sophistication. In spite of this, however, marketing lacks a seat at the table, and finance usually pips marketers to the top leadership roles in these companies.
And last but not least - I like to refer to the third and most influential role of marketing as the engine room. Here, marketing is more than a business function - it is the single most important driver of market share, growth and profits; it is a mindset of winning in the marketplace through robust consumer-centric strategies. These companies define marketing broadly to include P&L delivery, and hence marketers in these companies go on to hold key top management positions. Given that marketing includes cross-functional business leadership, marketing in these companies tends to be the most sophisticated and analytically driven. These are the companies that acknowledge the value of brand building and consumer insights, and invest heavily in advertising and marketing research. It is the companies that treat marketing as the engine room that are the best training grounds for classical marketing fundamentals.
Ideally, you want to be pushing to the right-hand side of this continuum, if you are to truly drive long-term growth and value through marketing.
Wednesday, April 14, 2010
Marketing's deadliest sin : Measuring the tip of the iceberg in advertising
One of the biggest sins a marketer can commit is scratching the 'tip of the iceberg' in advertising. It surprises me how often clients gauge the impact of their campaigns through superficial metrics such as awareness and message take-out.
What goes on 'between' message take-out and the actual sale is referred to as the 'black box' in marketing. It is every marketer's fundamental responsibility to see through (or at least try to see through) this black box. Advertising is ultimately intended to have two impacts: 1. a short-term sales impact, and 2. an equity impact which then leads to sales in the long term. So if you are not measuring these when trying to understand the impact of your campaigns, what exactly are you measuring?
Don't get me wrong. Awareness and message take-out are crucial in assessing the first level of impact of the advertising, but they are just the tip of the iceberg in terms of advertising's actual impact. One campaign may have 90% awareness and another 60%, but does that mean the latter is less effective? Even though the latter reached fewer people, it could have had a stronger impact on equity and subsequently on sales.
I may remember your ad, love it, and could talk all day about it. But this still does not mean that the ad strengthened the power of your brand in my mind, or that I will buy it. Awareness and message take-out will tell you if your brand reached people, but not if people reached for your brand!