Research Studies FAQs

Click on a link below to access that FAQ (frequently-asked question) topic.


Customer Perception Drivers and Timing
Customer Satisfaction
Dealer Markups
End-of-Simulation Research Study Orders
How Much Research Should We Order?
Missing Research Study Numbers
Missing Research Study Results
Prices Reported in Research Study Results
Product Quality Perceptions
Product Quality Perceptions and Sub-Assembly Component Failures
Product Quality Perception Variations Across Regions
Raw Materials, Set-Top Box Configurations, and Product Quality
Research Studies #3 and #21
Research Studies Cost as Reported on Financial Statements
Research Studies Reports
Research Study #9
Research Study #10
Research Study #12 (Retailer Inventory Holdings)
Research Study #14 (Interpretation of Headings)
Research Study #20 (Customer Satisfaction Drivers)
Research Study #21 (Results Missing)
Research Study #23 (No Results Reported Other Than The Specified Configuration)
Research Study #23 (Results Missing)
Research Study #24 (Market Shares)
Research Study #24 (No Results Reported)
Research Study #24 (Price Input For a Price Sensitivity Analysis)
Research Study #24 (Replacement Parts Costs Estimate)
Research Study Ordering Strategy
Retail Prices in Channel 1
Retail- and Direct-Channel Prices [LINKS products simulations with multiple channels]
Timing of Receipt of Research Study Results
Top-Box Score
Unfilled Orders Exceed Regional Demand
Unfilled Orders For One Product Exceed Industry-Wide Unfilled Orders
Varying Research Studies' Results In Successive Executions




Customer Perception Drivers and Timing

“Are product quality, service quality, and availability perceptions a function of current-quarter conditions or previous-quarter conditions?”

Product quality, service quality, and availability perceptions presumably are mostly based on current-quarter conditions, with previous-quarter conditions having some possible residual impact on current-quarter customer perceptions. While this is a reasonable generality, service quality perceptions are more complicated. Since service quality perceptions are based on customer surveys, customers are surveyed this quarter about their service experiences, most of which probably occurred in the previous quarter. Thus, last quarter's conditions (e.g., call center service levels and performance) influence this quarter's service quality perceptions.

revised 09/12/2013
[000113quarter.html]
listed under "Generate Demand"
listed under "Research Studies"
listed under "Service"





Customer Satisfaction

“What drives customer satisfaction for set-top boxes?”

Customer satisfaction is mostly driven by service level. Unfilled orders are another (presumably less important) driver of customer satisfaction. Please view the following video for more details.


revised 10/14/2017
[000247.html]
listed under "Definitions"
listed under "Performance Evaluation"
listed under "Research Studies"
listed under "Service"





Dealer Markups

“What are the dealer markup rates in the various regions?”

You’d need to consult research studies that report dealer prices (to final end-users). For example, Research Study #14 reports such final prices to end-user customers. In channel #1, you’d need to compare the dealer prices for your products (i.e., the prices at which dealers sell to final customers) with your manufacturer price to estimate the dealer markup rate. There will be some statistical noise in dealer prices, due both to natural variation in dealer markups over time and to dealer prices being estimated via surveys, which always involve some sampling error.

Minor changes in dealer prices from round to round (say, $5-$10) are not indicative of dealers changing their markup rates. Rather, this is just a reflection of the statistical noise in the surveying protocol.

In non-dealer channels, you sell direct to final customers so your manufacturer’s price will be identical to the final price to end-user customers.
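For channel #1, a rough sketch of the markup arithmetic follows (the prices below are hypothetical illustrations, not simulation outputs):

```python
# Hypothetical illustration (not LINKS output): estimating a channel-1 dealer
# markup rate from a surveyed dealer (end-user) price and your manufacturer price.
manufacturer_price = 250.00      # your price to channel-1 dealers (assumed value)
surveyed_dealer_price = 310.00   # end-user price reported in, e.g., Research Study #14 (assumed value)

markup_rate = (surveyed_dealer_price - manufacturer_price) / manufacturer_price
print(f"Estimated dealer markup rate: {markup_rate:.1%}")   # 24.0%
```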

revised 09/10/2004
[000195.html]





End-of-Simulation Research Study Orders

“Why should we order any research studies at the end of LINKS?”

If you are required to make a final presentation at the end of LINKS, then you’ll need some relevant current research to prepare your final presentation. You'd look pretty silly basing a final presentation on non-current research. Besides the compelling case for research to support the preparation of a final presentation, you presumably would like to see how things look competitively at the end of LINKS. The bottom line here is simply to order the normal research studies at the end of LINKS that you routinely use to assess your business situation.

revised 10/06/2003
[000158.html]





How Much Research Should We Order?

“We're having trouble deciding on which research studies to order and how much to spend on research studies. Can you provide us with some guidelines about how much we should spend on research studies?”

Order as much research as you can use efficiently and effectively, and no more. Remember, too, that you've probably never seen a headline in The Wall Street Journal (or other major business publication) of the sort "Company Goes Bankrupt For Spending Too Much on Research."

revised 10/08/2002
[000165.html]
listed under "Advice"
listed under "Research Studies"
listed under "Strategy"





Missing Research Study Numbers

“There are some research study numbers missing in our LINKS participant's manual. Why is that?”

Some particular research studies are only available in other LINKS variants. All of the research studies available to your firm are detailed in the participant’s manual for your LINKS variant.

revised 02/04/2007
[000083.html]





Missing Research Study Results

“We ordered many research studies before the last round’s input deadline, but only a few of them have been included in our Word doc output file. Where are the results of the missing research studies?”

All research studies ordered prior to the last LINKS round have been included at the end of your Word doc results file. You may have intended to order other research studies, but such LINKS research study inputs weren't made by the input submission deadline. This is an occasional “mishap” encountered by LINKS participants; discussion of research study ordering occurs within the LINKS team, but not all of the intended research study orders are actually input to the LINKS Simulation Database.

You may wish to review the “Audit Trace” to see the inputs processed before the last round (i.e., the input changes that your firm made and submitted to the LINKS Simulation Database for the last round). Click on the “Display Audit Trace Logfile” button in the LINKS Simulation Database, on the first web-screen after you log in to the LINKS Simulation Database with your LINKS firm’s passcode.

Reminder: Research studies orders are one-time only; if you wish to receive a research study “again,” it must be re-ordered each time you wish to receive it.

revised 11/10/2008
[000157.html]
listed under "Research Studies"





Prices Reported in Research Study Results

“Prices are reported in various research study results. Which "price" is being reported, the manufacturer price or the retail price?”

"Price" in research study results is the relevant price in the particular research study in question. Most research studies refer to prices paid by end-users (final customers). If the research study is derived from customer or market surveys, then "price" is the price that is relevant to end-users (final customers). For an indirect channel (like channel #1), retail prices are reported; for a direct channel (like channel #2), the manufacturer sets final prices so manufacturer prices are reported.

The key issue for "prices" is the channel. Throughout LINKS, it's necessary to keep the channel in mind when interpreting "price." In all LINKS documentation, "manufacturer price" is used to describe prices set by manufacturers. In a direct channel (like channel #2), "manufacturer price" is also the final price seen by end-users (final customers). In an indirect channel (like channel #1), "manufacturer price" is the price charged to retailers, who then mark up "manufacturer prices" to arrive at final retail prices.

revised 10/20/2004
[000109.html]





Product Quality Perceptions

“Why does product 6-1 have a lower product quality rating than product 7-1 in region #1?”

Product quality perceptions are based on customer surveys, so there will be some natural statistical variability in such surveying results. This could explain minor differences in product quality ratings for products 6-1 and 7-1 in region #1. However, the main drivers of product quality perceptions are configuration and failure rates. There are two possible explanations: products 6-1 and 7-1 have different configurations, or they have identical configurations but different failure rates due to different sub-assembly component suppliers being used. Are you really certain that products 6-1 and 7-1 have identical configurations? Perhaps the competitor's product has been reconfigured so it is no longer the same configuration as your product.

revised 04/18/2001
[000111.html]
listed under "Definitions"
listed under "Research Studies"





Product Quality Perceptions and Sub-Assembly Component Failures

“Do sub-assembly component failures have anything to do with product quality perceptions?”

The answer, in short, is "yes." The perceptions that customers hold about product quality are presumably influenced by a product's configuration and by a product's intrinsic reliability (non-failure, uptime percentage, etc.).

A product's configuration represents its principal benefit to customers. Configuration is why customers purchase a particular set-top box product. A product whose configuration closely matches customers' requirements should generally be perceived as a high quality product.

Failures of sub-assembly components are a secondary factor influencing product quality perceptions. Sub-assembly component failure rates presumably influence product quality perceptions negatively, with higher failure rates being associated with lower product quality perceptions.

revised 05/11/2000
[000077.html]
listed under "Procurement"
listed under "Research Studies"





Product Quality Perception Variations Across Regions

“Why do product quality perceptions vary across regions? After all, it's the same product everywhere (i.e., same configuration and same sub-assembly component failure rates).”

You are correct that a single set-top box product only has one configuration anywhere it is sold. Failure rates for a single product should be similar across regions, subject to typical random variations. However, customer preferences may vary across regions (and, potentially, across channels too). Thus, preferences for particular configurations don't have to be identical from region to region. And customers' dispreference for failure can also vary across regions, with some regions being characterized by customers who are more or less concerned with product failures.

revised 09/12/2013
[000112.html]
listed under "Configuration"
listed under "Generate Demand"
listed under "Product Development"
listed under "Research Studies"





Raw Materials, Set-Top Box Configurations, and Product Quality

“What's the best level of alpha and beta to use in set-top box configurations? How do our alpha and beta levels affect the quality of our set-top box products?”

You’ll need to conduct appropriate research to assess customers’ preferences for alpha and beta in set-top boxes. Please view the following video for details.


revised 10/03/2017
[000002.html]
listed under "Configuration"
listed under "Definitions"
listed under "Procurement"
listed under "Product Development"
listed under "Reconfigurations"
listed under "Research Studies"





Research Studies #3 and #21

“What's the difference between Research Study #3 and Research Study #21?”

Research Study #3 reports all current configurations while Research Study #21 reports only the current configurations that you request. Obviously, Research Study #3 will be much more expensive since it's "all" configurations and not just "one" configuration.

If you only want one or a few specific configurations, use Research Study #21. If you really need all configurations, use Research Study #3.

revised 06/15/2006
[000072.html]





Research Studies Cost as Reported on Financial Statements

“The Research Studies cost on our Corporate P&L Statement doesn't match our research studies actually received. What's going on?”

Research studies are executed after the simulation round concludes (i.e., after the financial reports for the simulation round are generated). Thus, research studies billings are lagged one simulation round. For example, on the round-5 financial reports, the research studies received with your round-4 financial reports will be billed.

revised 08/19/2005
[000133.html]
listed under "Financial and Operating Reports"
listed under "Research Studies"





Research Studies Reports

“Where do we find our research studies reports? We ordered lots of research studies with our inputs for the just-completed game run and we're anxious to review the research studies results.”

After each round in LINKS, your firm’s single Word doc output file contains your firm’s financial and operating reports followed by the results of any research studies that your firm ordered.

revised 01/27/2007
[000061.html]
listed under "Financial and Operating Reports"
listed under "Research Studies"





Research Study #9

“It doesn’t appear that our prices are included in the statistics reported in Research Study #9. Is that possible?”

You are probably overlooking the dealer markups involved in channel #1. The prices that you set in channel #1 are the manufacturer prices. These prices are marked up by the channel in arriving at final selling prices to end-user customers. Research Study #9 is a study of "market prices", the prices that final end-user customers pay for set-top box products. Thus, the prices in channel #1 reflect dealer markups.

revised 11/21/2004
[000207.html]





Research Study #10

“In reviewing our Q#6 research reports, RS#10 shows unexpected results. In our 7-firm LINKS industry, the frequency of use of one research study is reported as being 7%. But, how can that be in a 7-firm industry? One firm's usage would seem to be 14% usage, for example. And, we ordered RS#12 in Q#6 but the research study usage in RS#10 shows no usage of RS#12. What's going on?”

Research Study #10 is based on the last two quarters of data, as per the description of Research Study #10 in your LINKS participant's manual. Thus, in a 7-firm industry, a single firm ordering a given research study once over the last two quarters results in a 7% frequency (7 firms times 2 possible ordering occasions equals a total of 14 possible ordering occasions in the industry, for the last two quarters).

At Q#6, RS#10 results are based on research study orders in Q#4 and Q#5. RS#10 results are based on "history": the Q#6 research study orders haven't yet been fully processed when the RS#10 results are generated for Q#6. So, for the RS#10 results generated and displayed in Q#6, Q#4 and Q#5 constitute the historical period covered.
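As a minimal sketch of that arithmetic (the order count below is a made-up illustration):

```python
# Hypothetical illustration of the RS#10 frequency-of-use calculation:
# possible ordering occasions = firms in the industry x quarters covered (last two quarters).
firms = 7
quarters_covered = 2
ordering_occasions = firms * quarters_covered           # 14 possible ordering occasions

orders_of_study = 1                                     # one firm ordered the study once in the window
frequency = orders_of_study / ordering_occasions
print(f"Reported frequency of use: {frequency:.0%}")    # 7%
```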

revised 02/16/2007
[000126quarter.html]





Research Study #12 (Retailer Inventory Holdings)

“What are the retailer inventory holdings reported in Research Study #12?”

The RS#12 retail inventory holdings reference the amount of inventory held by your retailers in channel #1. Retailers purchase set-top boxes from manufacturers to re-sell to final end-user customers. But retailers also like to keep some inventory on-hand to deal with sales variability. Retailers' inventory is owned by them, not by manufacturers. That's why this retailer inventory doesn't appear on the manufacturers' reports.

revised 07/03/2009
[000173.html]
listed under "Research Studies"





Research Study #14 (Interpretation of Headings)

“In Research Study #14, what are the meanings of the headings on the four columns to the right of the market share bar charts?”

Please refer to the footnote below the Research Study #14 output for definitions of the headings used in the data display. Also, you might find it helpful to review the Research Study #14 description in the LINKS participant's manual.

revised 06/16/2006
[000055.html]





Research Study #20 (Customer Satisfaction Drivers)

“What drives customer satisfaction for set-top boxes?”

Customer satisfaction is mostly driven by service level. Unfilled orders are another (presumably less important) driver of customer satisfaction.

But, the full set of potential customer satisfaction drivers for set-top box products includes:

  • "Perceived Value": Is the product’s price worth it?
  • "Product Quality" Perception: Is the product what I want? Is the delivered product what you promised? Frequency/severity of product failure?
  • "Service Quality" Perception: Generally, what customer support is offered when I purchase/use your set-top box product? Specifically, if I have a problem (a product failure) with your set-top box product, will you help me in a convenient/useful fashion?
  • "Availability" Perception: Can I find the product when and where I want it?

And, since customer satisfaction is a survey-derived measure, typical randomness in survey responses of set-top box customers exists ... so minor variations in customer satisfaction (say, 1% to 2%) across brands probably don't reflect meaningful differences in brand performance on customer satisfaction.

revised 11/02/2012
[000248.html]
listed under "Research Studies"





Research Study #21 (Results Missing)

“Our Research Study #21 request doesn’t seem to have processed. Why is that?”

Research Study #21 reports the current configuration of a specific set-top box product. As noted in the research study description, you may request up to four such specific product configuration analyses in any LINKS round. In Research Study #21, results are reported for competitors’ products only. You already know your own products’ configurations (they’re reported at the bottom of the Product P&L Statements). That’s why there were no results for Research Study #21: you’re firm 5 and you requested current configurations for products 5-1 and 5-2, things that you can observe without charge on your Product P&L Statements. Research Study #21 exists to provide the current configurations of competitors’ products.

revised 02/24/2008
[000137.html]





Research Study #23 (No Results Reported Other Than The Specified Configuration)

“We ordered a concept test and no results were reported. Only the original submitted configuration and its associated concept test score (0.1%) are reported. Why are no additional results reported?”

When no concept test scores are reported other than for the configuration input to the concept test research study, nothing "near" the specified configuration is better than that specified configuration. In Research Study #23, concept test scores are reported for scanned concepts whose scores exceed that of the designated configuration by at least 1%. Your concept-test configuration for Research Study #23 is apparently a very poor configuration for customers, and nothing "close" to it is any better. Look elsewhere (i.e., not close to this particular configuration) for a more desirable configuration.

revised 02/24/2008
[000218.html]





Research Study #23 (Results Missing)

“We ordered a concept test and the results are missing from our research studies output. Please advise.”

Your concept test requests were not executed because your specified configuration (11101) for your concept tests was invalid. The first character must be H or M, followed by the configuration numbers (e.g., H11101). This also is detailed at the bottom of the RS#23 web-screen. Also, the error messages that were generated are impossible to miss, unless you failed to use the "Submit" button to regenerate this web-page after entering your inputs.

And, specified configurations to be concept tested must not include "?"s; otherwise, they are invalid.
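As a minimal sketch of that format check (the helper function below is hypothetical, not part of LINKS; it simply encodes the rules just described):

```python
import re

# Hypothetical check of the RS#23 configuration format described above:
# a leading 'H' or 'M', followed only by configuration digits, with no '?' placeholders.
def looks_like_valid_concept_config(config: str) -> bool:
    return bool(re.fullmatch(r"[HM]\d+", config))

print(looks_like_valid_concept_config("11101"))   # False -- missing the leading H or M
print(looks_like_valid_concept_config("H11101"))  # True
print(looks_like_valid_concept_config("H111?1"))  # False -- '?' placeholders are invalid
```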

revised 02/28/2017
[000223.html]





Research Study #24 (Market Shares)

“Market share estimates in Research Study #24 for our current prices don’t seem to correspond to our current market share as reported in Research Study #14. Why is this?”

In Research Study #24, the reported market shares are long-run estimates if you continue with all of your current customer-facing initiatives (configurations, marketing spending, service levels, etc.) as they are now, and so do your competitors. Market infrastructure issues (like unfilled orders) are not considered. Only your price is "manipulated" in Research Study #24. Thus, Research Study #24 market share estimates will not correspond exactly to your current actual market shares as reported in Research Study #14.

revised 06/17/2004
[000183.html]





Research Study #24 (No Results Reported)

“We ordered a price sensitivity analysis and no results were reported. Why is that?”

You didn't specify the full set of required inputs to execute a price sensitivity analysis. In particular, a product number of "None" is invalid; you must specify a particular product number for a price sensitivity analysis. As indicated at the bottom of the RS #24 input web-screen, a price sensitivity analysis with a product number of "None" will not be executed.

Even if you are reconfiguring a product as part of a price sensitivity analysis, that reconfiguration must be associated with a specific product number since the price sensitivity analysis is executed for a specific product that is already actively marketed in one or all designated markets.

revised 01/21/2008
[000237.html]





Research Study #24 (Price Input For a Price Sensitivity Analysis)

“What price do we input for a price sensitivity analysis?”

In this research study, “Your Price” is the manufacturer price. Your manufacturer price is the price that you input for this research study.

  • In a retail channel (like channel #1), the LINKS software automatically estimates the “Market Price” (including the retail markup). This marked-up price is presented to the final end-user customer in each price sensitivity analysis.
  • In direct channels (channels other than channel #1), “Your Price” is the final end-user customer price.

revised 09/12/2013
[000240.html]





Research Study #24 (Replacement Parts Costs Estimate)

“In a Research Study #24 price sensitivity analysis result, the estimate of replacement parts cost is "huge" (i.e., $890). What's going on?”

The "large" replacement parts cost figure reported in the RS#24 results for product 1-2 (region 3, channel 1) follows from the very low sales estimate. Replacement parts costs arise from sub-assembly component failures in past sales during the warranty period associated with the original sale of a product. With a large warranty (4 for product 1-2), there is a lot of past warranty coverage exposure.

The total replacement parts cost ($286,346 in the just-completed round for this product, for example) is divided by the number of units sold, as estimated in the price sensitivity analysis results. In this RS#24 price sensitivity analysis, estimated sales are very low (several hundred units). Thus, dividing a large replacement parts cost by a very small sales volume estimate results in a very large replacement parts cost per unit sold in the RS#24 price sensitivity analysis result.
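As a minimal sketch of that division (the unit-sales figure below is an assumed value chosen only to reproduce an estimate near the $890 mentioned above):

```python
# Hypothetical illustration of the per-unit replacement parts cost reported in RS#24.
total_replacement_parts_cost = 286_346   # total cost from the just-completed round (figure from the FAQ text)
estimated_units_sold = 322               # very low sales estimate from the analysis (assumed value)

per_unit_cost = total_replacement_parts_cost / estimated_units_sold
print(f"Replacement parts cost per unit sold: ${per_unit_cost:,.0f}")   # roughly $889
```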

revised 02/10/2007
[000110.html]





Research Study Ordering Strategy

“We didn’t really use all of the research that we ordered last time. So, we decided not to re-order any research this time. Is this a bad thing to do?”

Research is about solving "knowledge gaps" profitably. So, I guess if you know everything there is to know that's useful and necessary about competitors, the marketplace, and customers, then research studies would be superfluous (and costly, too). And, of course, in ordering no new research studies, you’d also have to assume that nothing important changes from last time to this time. So, if you know everything and there are no significant changes in your industry, then research studies would certainly be superfluous.

Now, what are the chances that you know everything and that nothing changes from one round to the next in LINKS?

revised 10/28/2004
[000200.html]





Retail Prices in Channel 1

“Why do retail prices in channel 1 change from round to round even though our manufacturer price stays constant?”

Retailers in channel 1 mark up manufacturers' prices in arriving at final end-user prices (i.e., retailers' prices). These markup rates vary by region. Retailer markups should be regarded as averages across all retailers. Thus, there will inevitably be some variation through time in reported retailers' prices even if the underlying manufacturers' prices remain constant through time. In addition, retailers' prices are based on surveys of end-user customer prices in channel 1, adding another source of statistical variability to reported retailers' prices.

revised 08/18/2004
[000185.html]
listed under "Definitions"
listed under "Generate Demand"
listed under "Research Studies"





Retail- and Direct-Channel Prices [LINKS products simulations with multiple channels]

“What relationships should exist between retail-channel prices and direct-channel prices? For example, should direct-channel prices always be less than retail-channel prices?”

Manufacturers' prices to channel #1 are the retailers’ costs. Retailers mark up manufacturer prices (i.e., the retailers’ cost) to their customers, the final end-user set-top box customers. Manufacturers' prices to direct channels (channels other than channel #1) are to final end-user set-top box customers. So, manufacturers' prices will generally be higher in direct channels than in channel #1 to allow for the retailers' markups in channel #1, assuming that a manufacturer’s target final end-user prices are meant to be similar in all channels.
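As a minimal sketch of that relationship (the target price and markup rate below are assumed illustrative values, not simulation parameters):

```python
# Hypothetical illustration: channel-1 vs. direct-channel manufacturer prices
# when the target final end-user price is meant to be similar in all channels.
target_end_user_price = 320.00    # assumed target price to final end-user customers
retailer_markup_rate = 0.25       # assumed channel-1 retailer markup rate

channel1_manufacturer_price = target_end_user_price / (1 + retailer_markup_rate)
direct_channel_manufacturer_price = target_end_user_price

print(f"Channel-1 manufacturer price:      ${channel1_manufacturer_price:.2f}")        # $256.00
print(f"Direct-channel manufacturer price: ${direct_channel_manufacturer_price:.2f}")  # $320.00
```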

It's difficult to know whether the comparable final end-user prices for set-top box products in channel #1 and in direct channels are predictable as to which would be relatively higher or lower. It seems reasonable to expect that retailers would observe brand-comparable prices in competitive direct channels in their region and would not feel positively inclined toward manufacturers who undercut retailers’ prices in a direct channel.

But, the larger issue may be channel-set segments. Some customers only purchase set-top boxes through a specific channel. Other customers consider all brand options in all channel options when making purchases. So, if the channel-specific segments are large compared to the joint-channel segment, then there wouldn't be much cross-channel competition.

revised 12/13/2008
[000172.html]
listed under "Definitions"
listed under "Financial and Operating Reports"
listed under "Generate Demand"
listed under "Research Studies"





Timing of Receipt of Research Study Results

“When do we receive the results of our research studies orders?”

The short answer is "order now and receive with your next round's results." Research studies that you order now are executed in connection with the next round's game run. These research studies will reflect the results of next round's activities in your industry. Research studies results are reported along with the next round's financial and operating reports.

revised 10/08/2002
[000164.html]





Top-Box Score

“What's a 'top-box score'?”

A top-box score is an overall summary measure of customer survey responses to a rating scale. For example, suppose that a four-point rating scale is used to measure customer satisfaction. The points on this rating scale might be described by the verbal anchors "Poor," "Fair," "Good," and "Excellent." Rather than translate these verbal anchors into the numeric scale 1, 2, 3, and 4 and then calculate the associated rating-scale average for the survey responses, we might focus on the percentage of survey respondents who checked the top box on this four-point rating scale (i.e., the percentage of survey respondents who checked "Excellent"). If 32.1% of survey respondents checked the top box ("Excellent" in this case), then the top-box score would be 32.1%.
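As a minimal sketch of that calculation (the response counts below are made up to match the 32.1% example):

```python
from collections import Counter

# Hypothetical survey responses on a four-point satisfaction scale;
# the counts are illustrative only (321 of 1,000 respondents chose "Excellent").
responses = (["Excellent"] * 321) + (["Good"] * 412) + (["Fair"] * 189) + (["Poor"] * 78)

counts = Counter(responses)
top_box_score = counts["Excellent"] / len(responses)
print(f"Top-box score: {top_box_score:.1%}")   # 32.1%
```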

Rating-scale averages summarize all survey responses while top-box scores focus on the extreme upside of rating scales. Thus, top-box scores are about customer delight, not merely satisfaction. Extreme beliefs drive customer behavior, and top-box scores focus on measurement of the extreme upside of customer rating scales. Note, also, that customer satisfaction and product/service perception measures are normally only useful in comparison to other similar measures, either at earlier points in time or for competitive products/services at the same point in time.

revised 06/14/2006
[000230.html]
listed under "Definitions"
listed under "Research Studies"





Unfilled Orders Exceed Regional Demand

“Unfilled orders for one of our products exceed industry demand in that region. Is that really possible?”

Sure, why not. Unfilled orders have nothing to do with actual "filled orders" (i.e., sales). Unfilled orders represent additional potential demand that might have been realized beyond "filled orders" (i.e., sales) if sufficient product supply had been available to meet all customer purchase requests.

A high level of unfilled orders could also reflect industry-wide double-counting if multiple firms' products in a region simultaneously have unfilled orders. If two products simultaneously have unfilled orders in a region, then some customers undoubtedly would have wished to purchase first one of the products and then the other product when the stockout situation for the first product was encountered. In such a situation, this single customer would have been counted as an unfilled order by both stocked-out products.

Such a high level of unfilled orders is certainly suggestive of lots of accessible market potential in that region with the current marketing programs (including the current configurations and prices) of all of the actively-distributed products in that region. Presumably, there need to be some increases in supply (i.e., increased production levels) for those products with unfilled orders, perhaps combined with appropriate price increases to maximize those firms' overall margins in that region.

revised 11/21/2004
[000203.html]
listed under "Generate Demand"
listed under "Research Studies"





Unfilled Orders For One Product Exceed Industry-Wide Unfilled Orders

“The reported unfilled orders for one of our products exceed the total reported industry-wide unfilled orders. How is that possible?”

The culprit here is the inventory management behavior of channel #1 dealers combined with rising demand in a region. If dealers stock out, they will reorder in anticipation of future (continuing) rising demand above current sales levels, as well as having to account for their (i.e., dealers’) desired inventory levels in the future. These are the total unfilled orders that manufacturers see arising from channel #1. Industry-wide unfilled orders, as reported in Research Study #12, reference actual final end-user customer stockouts now (not in the future).

Note, too, that since industry-wide unfilled orders are customer-based, industry-wide unfilled order estimates presumably are based on customer surveys. Such survey-based estimates contain some statistical noise as well as reflecting the potential for biases in customer surveys, especially if there were lots of customers who encountered stockout situations. Why wouldn’t a thoughtful/rational survey respondent claim to have wanted to buy and encountered a stockout situation, to encourage manufacturers to have more plentiful inventory, when no contractual purchase commitment is required within the survey?

revised 11/21/2004
[000204.html]





Varying Research Studies' Results in Successive Executions

“By mistake, we happened to order two concept tests (Research Study #23) of the same configuration in a particular market. The results were different. How can that be?”

Any research study in LINKS, as in real life, involving customer surveys or laboratory/field experiments on customers (i.e., "custom" research studies) exhibits normal statistical randomness. Thus, repeated executions of the same research study of this type naturally lead to varying results, reflecting the statistical noise that’s always present in such research. In contrast, LINKS research studies based on competitive benchmarking or the analysis of historical data ("syndicated" research studies) would always yield the same results in multiple executions of the same research studies at the same point in time.

To further complicate matters, the answer would differ if the same research studies had been executed at different points in time. At different points in time, changes in customer behavior/preferences or changes in the competitive environment could lead to changes in research study results.

revised 11/21/2004
[000205.html]




LINKS® is a registered trademark of Randall G Chapman.  Copyright © 2004-2018.  All rights reserved.