Research Studies FAQs

Click on a link below to access that FAQ (frequently-asked question) topic.


Customer Perception Drivers and Timing
Design Quality Perception Variations Across Regions
End-of-Simulation Research Study Orders
Experience Quality (Service Operations Quality) Perceptions
How Much Research Should We Order?
Missing Research Study Numbers
Missing Research Study Results
Research Studies #3 and #21
Research Studies Cost as Reported on Financial Statements
Research Studies Reports
Research Study #10
Research Study #14 (Interpretation of Headings)
Research Study #23 (No Results Reported Other Than The Specified Configuration)
Research Study #23 (Results Missing)
Research Study #24 (Market Shares)
Research Study #24 (No Results Reported)
Research Study Ordering Strategy
Timing of Receipt of Research Study Results
Top-Box Score
Unfilled Orders Exceed Regional Demand
Varying Research Studies' Results in Successive Executions




Customer Perception Drivers and Timing

“Are design quality, experience quality, and accessibility perceptions a function of current-quarter conditions or previous-quarter conditions?”

Design quality (service design quality), experience quality (service operations quality), and accessibility perceptions are presumably based mostly on current-quarter conditions, with previous-quarter conditions having some possible residual impact on current-quarter customer perceptions. While this is a reasonable generality, experience quality (service operations quality) perceptions are more complicated. Since experience quality perceptions are based on customer surveys, customers are surveyed this quarter about their service experiences, most of which probably occurred in the previous quarter. Thus, last quarter's conditions (e.g., service levels and performance) influence this quarter's experience quality (service operations quality) perceptions.

revised 09/12/2013
[000113sm.html]
listed under "Generate Demand"
listed under "Research Studies"
listed under "Service"





Design Quality Perception Variations Across Regions

“Why do design quality perceptions vary across regions? After all, it's the same service everywhere (i.e., same configuration).”

You are correct that a single support service has only one configuration anywhere it is sold. However, customer preferences may vary across regions. Thus, preferences for particular configurations don't have to be identical from region to region.

revised 09/12/2013
[000112sm.html]
listed under "Configuration"
listed under "Marketing"
listed under "Research Studies"





End-of-Simulation Research Study Orders

“Why should we order any research studies at the end of LINKS?”

If you are required to make a final presentation at the end of LINKS, then you'll need some relevant current research to prepare your final presentation. You'd look pretty silly basing a final presentation on non-current research. Besides the compelling case for research to support the preparation of a final presentation, you presumably would like to see how things look competitively at the end of LINKS. The bottom line here is simply to order the normal research studies at the end of LINKS that you routinely use to assess your business situation.

revised 10/06/2003
[000158.html]





Experience Quality (Service Operations Quality) Perceptions

“What drives experience quality (service operations quality) perceptions?”

Call center usage rate (lower is better from the customer's viewpoint), CSR salary (higher salary attracts, retains, and motivates more-able service personnel), and CSR turnover (training new CSRs takes time and energy away from providing customer service) all influence experience quality (service operations quality) perceptions.

revised 02/26/2000
[000032sm.html]
listed under "Research Studies"
listed under "Service"
listed under "Service Capacity"





How Much Research Should We Order?

“We're having trouble deciding on which research studies to order and how much to spend on research studies. Can you provide us with some guidelines about how much we should spend on research studies?”

Order as much research as you can use efficiently and effectively, and no more. Remember, too, that you've probably never seen a headline in The Wall Street Journal (or other major business publication) of the sort "Company Goes Bankrupt For Spending Too Much on Research."

revised 10/08/2002
[000165.html]
listed under "Advice"
listed under "Research Studies"
listed under "Strategy"





Missing Research Study Numbers

“There are some research study numbers missing in our LINKS participant's manual. Why is that?”

Some particular research studies are only available in other LINKS variants. All of the research studies available to your firm are detailed in the participant’s manual for your LINKS variant.

revised 02/04/2007
[000083.html]





Missing Research Study Results

“We ordered many research studies before the last round’s input deadline, but only a few of them have been included in our Word doc output file. Where are the results of the missing research studies?”

All research studies ordered prior to the last LINKS round have been included at the end of your Word doc results file. You may have intended to order other research studies, but those research study inputs weren't made by the input submission deadline. This is an occasional "mishap" encountered by LINKS participants: research study ordering is discussed within the LINKS team, but not all of the intended research study orders are actually input to the LINKS Simulation Database.

You may wish to review the "Audit Trace" to see the inputs processed before the last round (i.e., the input changes that your firm made and submitted to the LINKS Simulation Database for the last round). Click on the "Display Audit Trace Logfile" button on the first web-screen after you log in to the LINKS Simulation Database with your LINKS firm's passcode.

Reminder: Research study orders are one-time only; if you wish to receive a research study "again," it must be re-ordered each time you wish to receive it.

revised 11/10/2008
[000157.html]
listed under "Research Studies"





Research Studies #3 and #21

“What's the difference between Research Study #3 and Research Study #21?”

Research Study #3 reports all current configurations, while Research Study #21 reports only the current configurations that you request. Obviously, Research Study #3 will be much more expensive, since it covers "all" configurations rather than just the one or few that you request.

If you only want one or a few specific configurations, use Research Study #21. If you really need all configurations, use Research Study #3.

revised 06/15/2006
[000072.html]





Research Studies Cost as Reported on Financial Statements

“The Research Studies cost on our Corporate P&L Statement doesn't match our research studies actually received. What's going on?”

Research studies are executed after the simulation round concludes (i.e., after the financial reports for the simulation round are generated). Thus, research study billings are lagged one simulation round. For example, the round-5 financial reports bill for the research studies received with your round-4 financial reports.

revised 08/19/2005
[000133.html]
listed under "Financial and Operating Reports"
listed under "Research Studies"





Research Studies Reports

“Where do we find our research studies reports? We ordered lots of research studies with our inputs for the just-completed game run and we're anxious to review the research studies results.”

After each round in LINKS, your firm’s single Word doc output file contains your firm’s financial and operating reports followed by the results of any research studies that your firm ordered.

revised 01/27/2007
[000061.html]
listed under "Financial and Operating Reports"
listed under "Research Studies"





Research Study #10

“In reviewing our Q#6 research reports, RS#10 shows unexpected results. In our 7-firm LINKS industry, the frequency of use of one research study is reported as being 7%. But, how can that be in a 7-firm industry? One firm's usage would seem to be 14% usage, for example. And, we ordered RS#12 in Q#6 but the research study usage in RS#10 shows no usage of RS#12. What's going on?”

Research Study #10 is based on the last two quarters of data, as per the description of Research Study #10 in your LINKS participant's manual. Thus, in a 7-firm industry, a single firm ordering a research study once over the last two quarters is reported as a 7% frequency: 7 firms times 2 possible ordering occasions equals 14 possible ordering occasions in the industry over the last two quarters, and 1/14 is about 7%.
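
For illustration only, here is a minimal sketch of that frequency arithmetic in Python, using the assumed numbers from the question (this is not LINKS code):

    # Assumed values: a 7-firm industry, RS#10 covering the last two
    # quarters, and one firm ordering a given research study once.
    firms = 7
    quarters_covered = 2
    orders_observed = 1

    possible_occasions = firms * quarters_covered      # 14 ordering occasions
    frequency = orders_observed / possible_occasions   # 1/14
    print(f"Reported frequency: {frequency:.0%}")      # about 7%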

At Q#6, RS#10 results are based on research study orders in Q#4 and Q#5. RS#10 results are based on "history," and the Q#6 research study orders haven't yet been fully processed when the RS#10 results are generated for Q#6. That's why your Q#6 order of RS#12 shows no usage: Q#4 and Q#5 constitute the historical period included in the RS#10 results displayed in Q#6.

revised 02/16/2007
[000126quarter.html]





Research Study #14 (Interpretation of Headings)

“In Research Study #14, what are the meanings of the headings on the four columns to the right of the market share bar charts?”

Please refer to the footnote below the Research Study #14 output for definitions of the headings used in the data display. Also, you might find it helpful to review the Research Study #14 description in the LINKS participant's manual.

revised 06/16/2006
[000055.html]





Research Study #23 (No Results Reported Other Than The Specified Configuration)

“We ordered a concept test and no results were reported. Only the original submitted configuration and its associated concept test score (0.1%) are reported. Why are no additional results reported?”

When no concept test scores are reported other than for the configuration you input to the concept test research study, nothing "near" your specified configuration is better than it. In Research Study #23, concept test scores are reported only for scanned concepts whose scores exceed that of the designated configuration by at least 1%. Your concept-test configuration is apparently a very poor configuration for customers, and nothing "close" to it is any better. Look elsewhere (i.e., not close to this particular configuration) for a more desirable configuration.

revised 02/24/2008
[000218.html]





Research Study #23 (Results Missing)

“We ordered a concept test and the results are missing from our research studies output. Please advise.”

Your concept test requests were not executed because your specified configuration (11101) for your concept tests was invalid. The first character must be H or M, followed by the configuration numbers (e.g., H11101). This is also detailed at the bottom of the RS#23 web-screen. The error messages that were generated are hard to miss, unless you failed to use the "Submit" button to regenerate the web-page after entering your inputs.

Also, specified configurations to be concept tested must not include any "?" characters; configurations containing "?"s are invalid.
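
For illustration only, here is a hypothetical Python checker mirroring these validity rules as described above (it is not part of LINKS, and it assumes a five-digit configuration body as in the H11101 example):

    import re

    def is_valid_concept_config(config: str) -> bool:
        # First character must be H or M, followed by the configuration
        # digits; "?" placeholders (or any non-digit) are invalid.
        return bool(re.fullmatch(r"[HM]\d{5}", config))

    print(is_valid_concept_config("11101"))   # False: missing H/M prefix
    print(is_valid_concept_config("H11101"))  # True
    print(is_valid_concept_config("H111?1"))  # False: contains "?"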

revised 02/28/2017
[000223.html]





Research Study #24 (Market Shares)

“Market share estimates in Research Study #24 for our current prices don’t seem to correspond to our current market share as reported in Research Study #14. Why is this?”

In Research Study #24, the reported market shares are long-run estimates assuming that you continue with all of your current customer-facing initiatives (configurations, marketing spending, service levels, etc.) as they are now, and that competitors do the same. Market infrastructure issues (like unfilled orders) are not considered. Only your price is "manipulated" in Research Study #24. Thus, Research Study #24 market share estimates will not correspond exactly to your current actual market shares as reported in Research Study #14.

revised 06/17/2004
[000183.html]





Research Study #24 (No Results Reported)

“We ordered a price sensitivity analysis and no results were reported. Why is that?”

You didn't specify the full set of required inputs to execute a price sensitivity analysis. In particular, a service number of "None" is invalid; you must specify a particular service number for a price sensitivity analysis. As indicated at the bottom of the RS#24 input web-screen, a request with a service number of "None" will not be executed.

Even if you are reconfiguring a service as part of a price sensitivity analysis, that reconfiguration must be associated with a specific service number since the price sensitivity analysis is executed for a specific service that is already actively marketed in one or all designated markets.

revised 01/21/2008
[000237sm.html]





Research Study Ordering Strategy

“We didn’t really use all of the research that we ordered last time. So, we decided not to re-order any research this time. Is this a bad thing to do?”

Research is about closing "knowledge gaps" profitably. So, I guess if you know everything there is to know that's useful and necessary about competitors, the marketplace, and customers, then research studies would be superfluous (and costly, too). And, of course, in ordering no new research studies, you'd also have to assume that nothing important changes from last time to this time. So, if you know everything and there are no significant changes in your industry, then research studies would certainly be superfluous.

Now, what are the chances that you know everything and that nothing changes from one round to the next in LINKS?

revised 10/28/2004
[000200.html]





Timing of Receipt of Research Study Results

“When do we receive the results of our research studies orders?”

The short answer is "order now and receive with your next round's results." Research studies that you order now are executed in connection with the next round's game run. These research studies will reflect the results of next round's activities in your industry. Research study results are reported along with the next round's financial and operating reports.

revised 10/08/2002
[000164.html]





Top-Box Score

“What's a 'top-box score'?”

A top-box score is an overall summary measure of customer survey responses to a rating scale. For example, suppose that a four-point rating scale is used to measure customer satisfaction. The points on this rating scale might be described by the verbal anchors "Poor," "Fair," "Good," and "Excellent." Rather than translate these verbal anchors into the numeric scale 1, 2, 3, and 4 and then calculate the associated rating-scale average for the survey responses, we might focus on the percentage of survey respondents who checked the top box on this four-point rating scale (i.e., the percentage of survey respondents who checked "Excellent"). If 32.1% of survey respondents checked the top box ("Excellent" in this case), then the top-box score would be 32.1%.
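
As an illustration, here is a minimal Python sketch of the calculation, using hypothetical survey data (not LINKS output):

    from collections import Counter

    # 1,000 hypothetical responses on the four-point scale above.
    responses = (["Excellent"] * 321 + ["Good"] * 400
                 + ["Fair"] * 200 + ["Poor"] * 79)

    counts = Counter(responses)
    top_box_score = counts["Excellent"] / len(responses)
    print(f"Top-box score: {top_box_score:.1%}")   # 32.1%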

Rating-scale averages summarize all survey responses, while top-box scores focus on the extreme upside of rating scales. Thus, top-box scores are about customer delight, not merely satisfaction, and it is extreme beliefs that drive customer behavior. Note, also, that customer satisfaction and product/service perception measures are normally only useful in comparison to other similar measures, either at earlier points in time or for competitive products/services at the same point in time.

revised 06/14/2006
[000230.html]
listed under "Definitions"
listed under "Research Studies"





Unfilled Orders Exceed Regional Demand

“Unfilled orders for one of our services exceed industry demand in that region. Is that really possible?”

Sure, why not? Unfilled orders have nothing to do with actual "filled orders" (i.e., sales). Unfilled orders represent additional potential demand that might have been realized beyond actual sales if sufficient service capacity had been available to meet all customer purchase requests.

A high level of unfilled orders could also reflect industry-wide double-counting if multiple firms' services in a region simultaneously have unfilled orders. If two services simultaneously have unfilled orders in a region, then some customers undoubtedly would have tried to purchase one of the services first and then, encountering its unfilled-order situation, the other. Each such customer would have been counted as an unfilled order by both services.
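
For intuition, here is a tiny numeric sketch of that double-counting in Python (assumed numbers, for illustration only):

    # Suppose 100 frustrated customers in a region would have bought
    # either of two services, but both services are out of capacity.
    frustrated_customers = 100

    # Each such customer is tallied as an unfilled order by BOTH services.
    unfilled_service_a = frustrated_customers
    unfilled_service_b = frustrated_customers

    reported_unfilled = unfilled_service_a + unfilled_service_b
    print(reported_unfilled)   # 200 reported unfilled orders from only
                               # 100 actual customers, exceeding demand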

Such a high level of unfilled orders is certainly suggestive of lots of accessible market potential in that region, given the current marketing programs (including the current configurations and prices) of all of the actively-distributed services in that region. Presumably, supply needs to increase (i.e., increased service capacity) for those services with unfilled orders, perhaps combined with appropriate price increases to maximize those firms' overall margins in that region.

revised 11/21/2004
[000203sm.html]
listed under "Marketing"
listed under "Research Studies"
listed under "Service Capacity"





Varying Research Studies' Results in Successive Executions

“By mistake, we happened to order two concept tests (Research Study #23) of the same configuration in a particular market. The results were different. How can that be?”

As in real life, any research study in LINKS involving customer surveys or laboratory/field experiments on customers (i.e., "custom" research studies) exhibits normal statistical randomness. Thus, repeated executions of the same research study of this type naturally lead to varying results, reflecting the statistical noise that's always present in such research. In contrast, LINKS research studies based on competitive benchmarking or the analysis of historical data ("syndicated" research studies) would always yield the same results in multiple executions at the same point in time.

To further complicate matters, the answer would differ if the same research studies had been executed at different points in time. At different points in time, changes in customer behavior/preferences or changes in the competitive environment could lead to changes in research study results.

revised 11/21/2004
[000205.html]




LINKS® is a registered trademark of Randall G Chapman. All rights reserved. Copyright © 2004-2018 by Randall G Chapman. All rights reserved.