
The relative benefits of customer satisfaction surveys

By Customer surveys

The focus on customer service and satisfaction as the ticket to loyalty, growth and profits has been growing since the 1990s, with numerous studies providing hard evidence of the connection between customer satisfaction (CSAT) scores and profitability.

But there have been some prominent dissenting voices. Research from MIT Sloan concluded that although companies believe higher customer satisfaction scores equate to a bigger share of wallet, the ‘unfortunate reality’ is that the relationship between satisfaction and spending is very weak.

In the MIT Sloan Management Review article titled, The High Price of Customer Satisfaction, the authors illustrate the point: “A study of the banking industry by McKinsey found that, on average, only 5% of bank customers actually close their accounts each year and that the corresponding loss represents just 3% of total deposits. On the other hand, 35% of customers reduce their share of deposits, and that corresponding loss represents 24% of total bank deposits.”

How satisfaction is measured varies from one industry or business to another. For some, it may be based on retention or repeat customers, for others it’s a numerical score based on feedback. There is no ideal metric and it may be useful to combine the satisfaction survey with some other source of customer data. CSAT scores may not be the last word in profitability projections, but it’s worthwhile looking at some of the main benefits and limitations. 

The benefits

Check reality. Customer satisfaction is the result of delivering a product, service and overall experience that meets customer requirements. A customer satisfaction survey tells you whether your company is falling short of, meeting or exceeding expectations. In other words, it helps close the gap between your perception and the reality of what your customers want – a gap that can be alarmingly wide.

In a recent survey by HubSpot, “80% of surveyed CEOs said they deliver an exceptional customer experience, but only 8% (!) of customers agreed.” Correcting that kind of disconnect is probably the best reason to get clarity on what’s really important to your customers. 

Walk your talk. It’s easy to talk about how customer-centric your company is. But if your customers aren’t feeling like the centre of your universe, the words ring hollow. Surveys are an opportunity for people to tell you exactly how well, or not, your product, price and service meet their expectations.  And bear in mind that you need to present your survey to them as a sincere invitation for feedback.

The fact is, people are bombarded with survey requests, so make sure you clearly communicate how their feedback will help improve their experience. Then, let them know they’ve been heard by taking action. Recall the old adage, ‘actions speak louder than words’? In an age where talk is cheap and exceptionally loud, actions still speak louder.

Stay relevant.  Customers’ requirements and expectations are constantly changing, and you need to keep pace to stay competitive.  Survey feedback is one important source for those insights and the market’s response to the current global pandemic is an extreme case in point.

As the South African economy slowly reopens from COVID-19 lockdown, restoring consumer optimism is a key priority for businesses. It will take time and deliberate attention to renew customer relationships and rebuild the trust and confidence that loyalty is built on.

Customer surveys provide the information and the insight you need to understand and respond to customer expectations and concerns in this environment of constant change and uncertainty. Asking the right questions, listening to feedback and responding accordingly is the only way to remake a customer experience that’s relevant, satisfying and sustainable.

The limitations

Clarify the meaning.  CSAT scores are not a reflection of the quality or value of a business and its products overall. They are the customer’s perception of the value that was expected compared to the value that was delivered. The scores only partially address the wider scope of factors that need to be considered, such as loyalty, customer profitability or reputation.

Be realistic about scores. Consider that your high scores may reflect not high customer satisfaction but low expectations. Your existing customers have come to expect a certain kind of experience in doing business with you, so are naturally quick to notice improvement or decline in any aspect of your offering.

Over time, they’ll adjust their expectations to the value being consistently delivered. The result is that survey scores may suggest customers are ‘just satisfied’ when, in fact, this ‘average’ rating is due to lowered expectations based on the new normal of their experience. This means that maintaining high satisfaction scores over an extended period is costly and difficult.

Keep a holistic perspective. There is plenty of evidence to support the connection between customer satisfaction and profitability, but it’s a nuanced argument. A recent article by EY executive Ken Dickman questions at what point relentless customer focus leads to diminishing returns. 

An EY survey of high performing companies found a clear correlation between strong financial results and happy customers, reporting, “50% of survey respondents with increasing customer satisfaction report average annual revenue growth between 5% and 15% over the past three years.” However, high satisfaction scores are not the ticket to growth and Dickman advises that organisations need to consider “the value of a customer, the level of customer centricity that will unlock that value and the profitability of the value.”

Is now the time to measure customer satisfaction?

Considering the country is still in the early stages of reopening for business, it may seem premature, or even insensitive, to be querying customer satisfaction. But COVID-19 will likely be with us for a long time and require businesses to create new ways of operating under rules of social distancing, temporary closures and other risk mitigation practices. In these circumstances, delivering a satisfying customer experience involves meeting previously unimagined challenges to physical and emotional security.

There’s no question that your customers are looking at your business through very different lenses than they did even six months ago, and customer satisfaction surveys can be a valuable channel for feedback and insights to inform the way forward.

We can help you ask the right questions and interpret the data to find the most effective, economical solutions to customer satisfaction. 

Get in touch and let’s talk.


Do incentives improve survey response rates?

By Customer surveys, Employee surveys

Incentives are often the proposed solution for low survey response rates. After all, what’s the alternative when your ever-so-polite invitation to ‘help us serve you better’ is ignored?  

The fact is, people hate surveys. Even people who are big players in the survey business hate surveys. 

Why? Off the top, they’re often longer than promised, packed with curiously irrelevant questions, and just feel like more effort than they’re worth. Plus, they rarely produce any change that anyone sees or hears about.

Define response rate

So, considering the prevalent bad attitude towards surveys, an incentive of some kind would seem the thing to sweeten the deal and boost response rates. But hang on. Before you jump to an incentive as the solution to your response rate problem, you need to be clear on what ‘survey response rate’ actually means in your context. What are you trying to achieve?

  • Do you want to increase the number of people who respond to your survey invitation?
  • Do you want to increase the total number of responses in a particular area of your business?
  • Do you want to improve the representativeness of your survey responses?

Maybe offering incentives isn’t the answer you’re looking for. Increasing responses in a particular area of your business or improving the representativeness of responses suggests an adjustment to your sampling scheme, not your response rate.

 

See the value of quality survey information. Read How to improve the car buyer’s journey using customer feedback. 

The cost benefit vs the carrot   

If you’re looking at a pure and simple need to increase the number of people who respond to your survey invitation, offering incentives still may not be the answer. From a customer perspective, the decision to take a survey, or not, is a little more complex than weighing up your basic ‘do this and get that’ proposition. It’s a cost / benefit equation, and for your customers, there are numerous personal costs. For example:

  • Time. People are more time stressed now than ever before and are simultaneously encountering more requests for their feedback. Time is money and energy.
  • Effort. Long, complicated surveys actually take a lot of effort to complete.
  • Hassle / boredom. People are understandably irritated when what was billed in the email subject line as a ‘short’ survey turns into a very long survey. Particularly when the survey questions are repetitive.
  • Potential breach of privacy. Many people question promises of confidentiality.
  • Potential for spam and telemarketing. Many people worry their contact details will be sold (and resold over and over again) to unscrupulous marketers.
  • Survey vs sales pitch. With the increasingly common practice of ‘selling under the guise of research’, people have grown very sceptical about the legitimacy of survey invitations.

There are many very sound reasons why people hesitate to volunteer for surveys. 

Reduce the cost, increase the benefit.   

Once upon a time a survey invitation was flattering – an opportunity to speak and be heard as an individual. Today, digital forums of public opinion are an endless free-for-all and everyone is invited.

In comparison, it takes a very attractive incentive to make a survey worth the time and effort.

But attractive doesn’t necessarily mean monetary. Sometimes just reducing the cost factor in the customer cost benefit equation is attractive enough. For example:

  • Make the survey short, easy and even interactive. In fact, make your survey positively compelling with gamification.
  • Include open-ended questions that invite personal stories and use text analytics to glean insights.
  • Make sure questions are relevant to your stated interest.
  • Be transparent about how personal information will be used and protected.

And then, of course, offer benefits to participation:

  • Demonstrate changes to your way of doing business that are, in fact, a direct response to customer feedback.
  • Respond promptly to any requests for personal follow-up on particular issues.
  • Send a follow-up note of thanks. It’s basic good manners and shows respect for their time and effort.

If, how and when to offer monetary incentives. 

While some form of monetary incentive will surely get you more responses, do you want higher numbers or better-quality feedback? There are many very pragmatic arguments for avoiding monetary incentives (either cash or cash equivalent in the form of vouchers, gift cards or products). Bias is the most obvious.

People who have been paid to complete your survey tend to revert to a personal agenda, described by Dana Severson, writing in Inc.com, as “Lack of consideration: When a customer has a financial motivation, their objective changes from providing meaningful feedback to providing quick answers, especially when it’s a long survey.” Then there’s the threat of guilt bias: “While the lack of thoughtfulness is one concern, customers who are incentivised also have a tendency to provide more favourable answers due to the perception of a quid pro quo.” Be advised.

But if you’ve considered all the angles and believe monetary incentives will help drive survey success for you, here are a few tips for best results.

  • Be conservative. All incentives carry the risk of creating bias, but larger incentives come with a greater risk that respondents are in it for the money and not really interested in giving helpful or sincere feedback. So basically, you’re paying for bad information. Better to offer a small ‘token of appreciation for your participation’ – big enough to be meaningful, small enough to be a true ‘token’.
  • Incentivise everyone up front. Offer a small incentive to everyone you’re sampling, rather than a bigger incentive to those who complete and submit the survey. People will believe what they see before they’ll believe what you say you’re going to send them.
  • Offer incentives of equal value. Be sure to offer incentives that will appeal to everyone. Offering a discount voucher for future purchases will only interest those who intend to return and do business with you, excluding those who (for whatever reason) won’t. This is another way bias can creep into your survey results.

Consider your objectives and options 

Without a doubt, incentives do increase survey response rates, but may not be worth the risk of bias in its many guises. And you can’t leap to any solution until you’ve analysed your problem, which could well be a weakness in your sampling scheme.

There’s no simple answer to the question of whether incentives improve survey response rates. But get in touch and let’s talk about your situation. We’ll help you define your needs and develop a survey that generates healthy response rates and delivers the quality feedback you need to make meaningful connections with your customers.  


6 Steps of strategic survey analysis

By Customer surveys

One of the biggest challenges facing business today is how to efficiently and productively glean meaningful insights from the overwhelming quantity of data generated in the normal course of the day. From customer purchasing habits (loyalty program data) to contact centre performance stats (gamification data) and formal survey data, there are multiple sources of internal and external data to guide decision-making.

Organisational surveys are commonly used to identify and prioritise areas for business improvement. Traditional surveys are an effective tool for collecting data, but tend to be respondent-focused, without linking responses back to explicit business objectives. That’s where strategic surveys excel.

Strategic surveys intrinsically link content with organisational goals and objectives. The results provide a systemic view of what’s working in the business and which areas need improvement.

Strategic survey analysis in 6 steps

  1. Analyse strengths and weaknesses.
    The process begins with a look at the intra-survey strengths and weaknesses (how the items in the survey compare to each other) and the extra-survey strengths and weaknesses (how the results compare to results from similar surveys). This is done at both the individual question level and the category level.
  2. Look for patterns in the data.
    Typically, common themes will emerge. For example, when teamwork scores are low, it’s common to find problems in inter-departmental communication. These patterns give clues to the underlying circumstances of the weakness.
  3. Conduct statistical analysis.
    Often organisations need to quickly identify the areas that are most important to improve. Key Driver Analysis provides a way of selecting which areas to focus on by calculating each area’s leverage on the ‘bottom line’ measure – overall satisfaction. The high-priority targets identified by quadrant analysis are the areas that meet two criteria: they need improvement, and their improvement will strongly leverage overall satisfaction. The statistical relationship between each attribute and overall satisfaction is measured; items with high leverage (correlation with overall satisfaction) will have more impact on satisfaction than items with low leverage. By plotting the leverage (correlation) scores against the performance scores (the percentages or average scores for each attribute), it becomes apparent which items need priority attention.
  4. Conduct comment analysis.
    Simply reading open text comments in the survey can give one a flavour for the types of issues on respondents’ minds. However, proper interpretation can be difficult if there are many individual comments. Keyword searches are used to partly overcome this.
  5. Make demographic comparisons.
    In some cases, most or all demographic subgroups (e.g., regions, departments, etc.) will feel the same about a matter. In others, there will be large differences in how these subgroups feel. There can be instances where a large subgroup is almost completely responsible for a low score for an entire organisation. Without an analysis of sub-groups, corrective action may not be as effective, since targeted action will be necessary in certain cases.
  6. Produce summary findings.
    In this stage, the consulting team will summarise findings. The end result is recommendations of key items and areas to target for improvement on an overall organisation level and at the sub-group level. Further discussions before and after this phase may be necessary.

Systemic view from multiple perspectives

Employees, distributors, dealers, franchisees and customers have unique vantage points for providing intelligence on every aspect of your business. From the effectiveness of processes and current structures, to the quality of teamwork across functional units and barriers to innovation, they can tell you the why behind the what of your survey data. So why not ask them? A strategic survey is designed to do just that, and deliver targeted results for quick management response.

Contact us


Survey design mistakes to avoid

By Customer surveys

The recent proliferation of digital survey tools has made the art of collecting feedback an easy, economical exercise. Frequent, finely targeted surveys are well suited to current agile management practices in that they provide a quick snapshot of perceptions and experiences of people on the front lines of business, and allow quick responses. That’s the theory. Reality doesn’t always play out that way.

The rise of survey democracy

The downside to a ‘surveys for all’ democracy is the vast scope for misuse by well-intentioned survey design amateurs. That is, poorly designed surveys producing questionable data, misinterpreted as statistical truth, and presented to unwitting executives as evidence for data-driven decisions. See how wrong it can go?

That brief scenario is reason enough to work with survey professionals.

And survey fatigue is real. At a point, people just get tired of taking surveys. Don’t forget, they’re employees (engagement surveys, pulse surveys, culture surveys to name a few), customers (satisfaction surveys), citizens (market research surveys) and online media users (web surveys) and there’s a survey for every persona.

Common survey mistakes you don’t have to make

Whether you’re one of those well-intentioned survey design amateurs mentioned earlier, or a survey professional who isn’t above friendly reminders, here are some common survey design mistakes to avoid.

  • Blowing the cover of confidentiality
    If your survey is asking for information of a confidential nature, such as rating management performance or personal perceptions of corporate culture, make sure you live up to your promise of confidentiality. Examine your survey process for loopholes. For example, your results may be anonymous, but anyone with access to filtering the data by demographics could easily identify the one and only finance executive at your branch in Pinetown, Natal. It’s happened.
  • Survey feedback that goes nowhere
    People quickly tire of taking surveys when there’s no demonstrable result for the time and effort they put into answering the questions. Share feedback immediately and then follow up with communication about how and when issues will be addressed.
  • Confusing, compound questions
    Each question should have a single subject. For example, asking ‘Do you feel that your manager is fair and provides the support you need to do your job?’ is confusing. Are you asking about fairness or support? Those are two different questions.
  • Inappropriate language and style
    Make sure your survey language meets the level of understanding of your audience. Question wording should be short and simple for the sake of clarity. But also remember to avoid using jargon, acronyms or overly formal language.
  • One way communication
    Remember that a survey is more than simply feedback. It’s an opportunity to open a dialogue and create a platform for collaboration. Encourage people to participate on the basis that they’re part of a collective and what they say is meaningful and will be considered and applied to make improvements.
  • Wrong survey medium for the audience
    Technology has given rise to multiple delivery options, so be sure to pick the one your audience will be most likely to enjoy and engage with. Millennials, for instance, expect responsive, mobile-friendly surveys.
  • Delayed response and follow-up
    Somewhere in the world, patience is still a virtue. Not in the world of surveys. The days of waiting (and waiting, and waiting) for analysts to crunch, report and interpret the data are very long gone. People in general, and executives in particular, expect instant results. The object of the undertaking, after all, is to get instant feedback and fast survey results for action planning. Be prepared to deliver.
  • Missing an engagement opportunity
    Surveys are more than a static exercise to gather feedback. They’re an opportunity to actively listen to your people. Listen as a human being, not as an executive, and hear the humanity of the people responding. It’s a simple and powerful way to improve engagement.
  • Asking for flattery rather than feedback
    Your survey questions should emphasise pain points in your organisation, rather than offer opportunities for flattery. It’s all very well knowing whether or not your CEO is perceived to have excellent leadership qualities, but what you really want to know is why your product delivery is chronically delayed. That’s the problem you need to root out, and there are specific questions you need to ask.

Still want to go it alone?

Those are some of the major hurdles standing between you and quality survey data that you can use for meaningful management decision making.

We’re biased, of course, but it’s worth consulting a survey professional to help you do the right survey research, ask the right type of questions and create a survey that’s much more than a vehicle for data collection.

Contact us.

 

 


8 Tips for effective employee surveys

By Customer surveys

Well-structured employee surveys can yield a wealth of management information you can use to create a more positive, productive workplace.

People want to be heard and valued. Conducting regular employee surveys sends a message that their perceptions matter and responsiveness is proof of management’s commitment to upholding a fair two-way relationship with staff. The result is a give-and-take that builds trust, supports higher levels of retention and engagement, and improves the overall strength of the organisation.

The type of employee survey you need depends on your objectives. Do you want to measure employee engagement or check alignment to strategy? Do you want to find out how employees feel about organisational culture or whether the organisation is fulfilling employees’ expectations?

Here are the basic criteria for constructing sound employee survey items that deliver actionable results.

 

1. Define your objectives

Begin by defining exactly why you’re conducting the survey. What are you trying to accomplish, and importantly, what will you do with the results? It’s critical that the survey content is aligned with objectives and that the results help boost organisational effectiveness and improve employee engagement and commitment. In other words, you’re looking for more than ‘nice to know information’. Survey results should be actionable and enable organisational improvement.

 

2. Make questions easy to understand

Make your employee survey questions clear and easy to understand by using plain language that everyone will generally comprehend and interpret similarly. Also, use short sentences that speak directly to the point of the question. This will keep the readability scores high and difficulty levels low.

 

3. Create content that’s applicable to everyone

As a general rule, make sure your survey content is rateable by all employees, not just those in a specific area of the business or at a particular job grade.  Asking for ratings on issues outside of an individual’s experience creates confusion and likely cynicism about the purpose of the survey.

 

4. Keep questions focused on just one issue

Each item in the survey should deal with just one topic – not two or more. Statistical results for an item that addresses more than one topic will be ambiguous. For example, ‘My supervisor sets clear goals and provides positive reinforcement when employees meet the goals.’ What do you want to rate, the goal setting or the positive reinforcement?

 

5. Phrase items to generate dispersed responses

Ideally, there is variance in how employees respond to an item. When all employees respond at either end of the scale, the results don’t reveal much about their perceptions or the differences among groups.

 

6. Word items in the positive

All survey items should be worded in the positive so that the same end of the response scale always indicates favourable ratings. There is a common myth that mixing in some negatively phrased items will increase accuracy. In fact, this only confuses both the employees taking the survey and the people interpreting the results.

 

7. Write items to fit with the response scale

There is no one ideal type of employee survey response scale, so make sure that all items are written so the chosen scale is applicable and appropriate. For example, eValue uses a Likert scale with responses written in the familiar either / or format (agree / disagree, important / not important).

 

8. Be sure not to confound items and response scales

Avoid confounding variables that may lead to inaccurate results. For example, with a frequency response scale (‘always to never’) an item such as, “My supervisor always sets clear goals”, would be confounded.

 

Repeat the process

Employee surveys are most useful when they’re part of an ongoing measurement system, rather than just one-time snapshots of employee satisfaction or engagement. Trend results are the most powerful way of determining what’s improving and which areas of the business require further attention or different interventions, assuming that some form of remedial action has been taken.

As for frequency, some organisations conduct annual employee surveys, with pulse surveys conducted throughout the year. Some only survey employees every 18 – 24 months. Ultimately, the frequency depends on the type of survey you’re conducting, what you’re trying to achieve and your readiness to act on feedback. Remember, the emphasis is on action and using survey results to improve the employee experience and business performance.

 

 

 


How you should be measuring customer experience

By Customer surveys

As every business knows – delivering first-rate experiences to customers is a strategic priority. But many companies still fail to accurately quantify the complete customer experience (CX). Without a layered approach to CX measurement, you’re missing the first crucial steps in getting insight into customer satisfaction and it’s unlikely you’ll be able to predict customer loyalty with any accuracy. Fail to do this, and you probably won’t survive the decade in a world that’s finely balanced on the fulcrum of customer-centricity.

The case for CX measurement

Customer experience is simple enough – your company’s interaction with your customers. From their perspective, you will be judged on how well you deliver to their wants and needs. But that’s where simplicity ends. Leveraging the power of CX to influence consumer spending and inspire loyalty to your company begins with an understanding of the complexity of the customer experience.

CX takes place in three areas: the customer journey, the various points of interaction between the customer and your brand, and the different environments in which interaction takes place – from digital environments to the sales floor.

Contact with your company can be both direct (during the purchase phase) and indirect (advertising, word of mouth, news items). Now add to this the different levels at which the customer experience takes place: physical, sensorial, emotional and rational, and ideological.

Although creating a superior customer experience involves six disciplines – strategy, understanding your customer, informed design, accurate measurement, governance and culture – this blog focuses on measuring customer experiences, and on how your understanding of the role and complexity of the customer experience determines how you choose to measure CX.

Common CX measurements

Two of the most widely used customer experience metrics are the Net Promoter Score (NPS) and the Customer Satisfaction Score (CSAT). eValue’s CX survey maps and links these metrics for comparative demographic analysis and historical trend analysis.

The Net Promoter Score (NPS) – also called the brand or relationship metric – gathers data based on the question: “On a scale of 0 to 10, how likely are you to recommend us to a friend or colleague?” Scores of nine or 10 represent ‘promoters’, seven or eight ‘passives’, and zero to six ‘detractors’. For deeper qualitative feedback that can guide product or service improvements, an NPS can include a more nuanced follow-up question like “Care to tell us why?” Because the NPS draws on a customer’s experiences with your organisation for the entire duration of the relationship, it is mainly used as a predictor of customer loyalty and can be used to inform your customer retention strategy.
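The scoring above reduces to a simple formula: NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python (the sample scores are invented for illustration):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count in the total but cancel out of the numerator."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Six promoters, two passives and two detractors out of ten responses:
print(nps([10, 9, 9, 10, 9, 9, 7, 8, 5, 3]))  # 40.0
```

Note that the result ranges from -100 (all detractors) to +100 (all promoters), which is why NPS is reported as a score rather than a percentage of satisfied customers.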

An NPS is the best starting point when initiating a customer feedback program, but keep in mind that NPS surveys will give you an overview of customer experience only. For a customer experience measurement at specific touch points or transactions along the customer journey, you’ll want to include the Customer Satisfaction Score (CSAT).

Your CSAT survey gathers data about a recent customer interaction with your company, like a purchase or customer service call, and how satisfied the customer was with that transaction. CSAT is a popular CX measurement because it can be easily customised to your particular company’s customer experience landscape. It is also highly flexible, allowing for several questions in a longer survey. Responses are then averaged to create an amalgamated CSAT score. It’s recommended that an open-ended follow-up question is included, giving people the opportunity to tell you what aspects of satisfaction are working, or not working, for them.
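The averaging step can be sketched in a few lines of Python. This assumes a 1-to-5 rating scale and expresses the mean as a percentage of the scale maximum, which is one common convention, not the only one; the sample responses are invented:

```python
def csat(responses, scale_max=5):
    """Amalgamated CSAT: the mean of all ratings across all questions and
    respondents, expressed as a percentage of the scale maximum.
    (Assumes ratings on a 1..scale_max scale.)"""
    ratings = [r for answers in responses for r in answers]
    if not ratings:
        raise ValueError("no ratings")
    return 100 * sum(ratings) / (len(ratings) * scale_max)

# Three respondents answering two rating questions each:
print(round(csat([[5, 4], [4, 4], [3, 5]]), 1))  # 83.3
```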

Delving deeper into the customer experience

NPS and CSAT surveys are undoubtedly a valuable source of customer experience metrics. But using a layered approach to measurement will give you a deeper, more nuanced understanding of the landscape and the game-changing insights that could set your company apart.

We recommend delving deeper into your customers’ experiences by using a drip-approach to NPS. This means continuously keeping your finger on the pulse of customer sentiment so you can react to findings in real-time, instead of just once or twice a year, as is common practice.

In the same way, using short CSAT surveys often and when needed, can be used effectively for a specific time period of change, or to identify staff who may need additional training, for example.

When planning to make product or service improvements, include a Product Satisfaction Score (PSAT) in your metrics with a question like: “How satisfied are you with [this product or service]?”

Customer Effort Score (CES) surveys ask the question: “How much effort did it take to have your request handled?”, with the customer providing a score of 1 to 10. A CES can provide insight into your customer churn rate and is a great way to effectively amp up your customer support offering. In fact, CES may be a more accurate forecaster of repurchasing than CSAT.

Putting customer experience metrics to work for you

Measuring customer experience can happen in many ways, using multiple CX metrics in varying combinations. Irrespective of your methodology of choice, to be truly effective, measurement should cover the main categories – quality, satisfaction, loyalty and advocacy. Only then will you have the data needed to manage and nurture long-term customer relationships.

Your next step is deciding ways to improve CX, testing these and coming up with scenarios based on your new results, which in turn inform further changes or improvements, and so on.

Of course, continuous testing and optimisation can be an insurmountable challenge for companies using outdated methodology – like surveys that go out to customers once or twice a year. More agile survey methodologies deliver results in real-time, which is especially important for rapidly evolving products and services. They are also able to process this data and unlock insight from real-time qualitative feedback. Once the “why” behind your experience measurements has been established, you’ll be able to quickly prioritise improvements that will drive your business forward.

Contact us today to learn how eValue can help you unlock the customer insights that lead to genuine customer-centric service and long-term loyal relationships.