

Genomic testing to determine drug response: measuring preferences of the public and patients using Discrete Choice Experiment (DCE)




The extent to which a genomic test will be used in practice is affected by factors such as ability of the test to correctly predict response to treatment (i.e. sensitivity and specificity of the test), invasiveness of the testing procedure, test cost, and the probability and severity of side effects associated with treatment.


Using discrete choice experimentation (DCE), we elicited preferences of the public (Sample 1, N = 533 and Sample 2, N = 525) and cancer patients (Sample 3, N = 38) for different attributes of a hypothetical genomic test for guiding cancer treatment. Samples 1 and 3 considered the test/treatment in the context of an aggressive curable cancer (scenario A) while the scenario for sample 2 was based on a non-aggressive incurable cancer (scenario B).


In aggressive curable cancer (scenario A), everything else being equal, the odds ratio (OR) of choosing a test with 95% sensitivity versus a test with 50% sensitivity was 1.41, and willingness to pay (WTP) was $1331, on average, for this improvement in test sensitivity. In this scenario, the OR of choosing a test with 95% versus 50% specificity was 1.24 (WTP = $827). In non-aggressive incurable cancer (scenario B), the OR of choosing a test with 95% sensitivity was 1.65 (WTP = $1344), and the OR of choosing a test with 95% specificity was 1.50 (WTP = $1080). Reducing the severity of treatment side effects from severe to mild was associated with large ORs in both scenarios (OR = 2.10 and 2.24 in scenarios A and B, respectively). In contrast, patients had a very large preference for 95% sensitivity of the test (OR = 5.23).


The type and prognosis of cancer affected preferences for genomically-guided treatment. In aggressive curable cancer, individuals placed more emphasis on the sensitivity than on the specificity of the test. In contrast, for a non-aggressive incurable cancer, individuals put similar emphasis on the sensitivity and specificity of the test. While the public expressed a strong preference for lowering the severity of side effects, improving the sensitivity of the test had by far the largest influence on patients’ decision to use genomic testing.


Treatment options for cancer are mainly chosen based on the classification of the tumor and are usually based on the best knowledge of histogenesis, histological type, and stage of disease [1]. However, these criteria often fail to accurately differentiate among distinct subtypes of tumors, especially with respect to likelihood of response to treatment, forcing clinicians and patients to choose empirically. Thus, many patients end up experiencing significant side effects of chemotherapy without receiving clinical benefit [2].

Recent advances in genomics have created hope that genomic testing may help to identify patients who will likely respond to a particular drug and/or experience side effects. This information is valuable both for patients and physicians when choosing among possible treatment options and trading off between risks and benefits. For example, panitumumab, a drug for the treatment of colon cancer, was initially shown to be effective only in 10% of cases. However, genomic testing revealed that response rates were much higher in those without a KRAS mutation in their tumor [3]. Other examples are HER2 expression in breast cancer patients, which predicts response to trastuzumab [4] and the BCR-ABL genotype in chronic myeloid leukemia, which predicts response to imatinib mesylate [5].

Despite some clear advantages of using genomic tests to predict response to therapy, there are also some limitations. Genomic test results often have a probabilistic relationship with drug response – a certain genotype in the tumor may increase (or decrease) the probability of treatment response, but this relationship is rarely absolute [3]. This prediction error may lead to the misclassification of those who will respond (i.e. the sensitivity and specificity of tests are not perfect). In practice, the extent to which an imperfect genomic test will be used is affected by multiple factors. Patients and physicians consider the invasiveness of the testing procedure, the probability and severity of treatment side effects, and the overall costs before deciding about the usefulness of a genomic test [6, 7].

The other important challenge is the impact of genomic testing on health care costs. The increasing number of diagnostic and predictive tests resulting from advances in genomics is creating growing pressure on already soaring health care costs. There are ongoing debates about the added clinical and economic value of these new technologies and the appropriate methods for measuring their potential benefits [8, 9]. New genomic tests, even if proven to deliver clinical benefit, are rarely cost saving. Thus, decisions about their overall value should be based on an appropriate balance between the clinical benefits and the costs of these technologies. In this context, it is important to determine which attributes of a genomic test matter most to patients when they decide about their treatment options. In general, approval and use of genomic tests vary widely across jurisdictions and populations. Publicly (or privately) funded health care benefit providers are often interested in learning about taxpayers’ (or privately insured populations’) opinions about the value of these genomic tests. Knowledge of these preferences will enable health benefit providers to select genomic tests with the highest perceived value when making funding decisions. This information can be used to prioritize future research areas and suggest aspects of genomic testing where improvement would have the most value to patients. Finally, this investigation may offer further insight into the perceptions of patients who have directly experienced the disease and their evaluation of different aspects of testing for cancer treatment. This information can potentially help physicians to offer treatment options that better match patients’ values and preferences [10].

Using a discrete choice experiment (DCE), we explored the relative impacts (i.e. relative preference weights) of different attributes of a genomic test on individuals’ decision to use the test for guiding cancer treatment. We investigated whether these relative impacts are influenced by type of cancer and its prognosis. Finally, we investigated how these relative impacts may differ between cancer patients and the public. Our knowledge about these relative preference weights can offer a value-based framework [11] for evaluating and comparing new genomic tests.


Study sample

Two samples from the public (sample 1 and sample 2) and a sample of current or former cancer patients participated in this study. The samples from the public were recruited by Ipsos Reid (Vancouver, British Columbia) and were representative of the Canadian general population in terms of demographic and socio-economic characteristics. The third sample (sample 3) consisted of current or former lymphoma patients who had voluntarily agreed to be contacted about research projects in British Columbia (BC), Canada.

All subjects were invited to participate in this web-based study through email. All participants were at least 19 years old and were able to read and write in English. In the initial letter, we provided a brief description of the study and invited individuals to participate. Once they agreed, each participant provided informed consent and then followed a web link to the online questionnaire. Participants could choose not to answer any of the questions or withdraw at any point. The protocol for this study was reviewed and approved by the University of British Columbia - British Columbia Cancer Agency (BCCA) Research Ethics Board.

Study procedure

At the beginning of the DCE questionnaire, we described one of two possible scenarios to the participants. We asked participants to imagine a situation in which they had been diagnosed with either an aggressive curable cancer (scenario A) or a non-aggressive incurable cancer (scenario B) and had the option of choosing a genomic test that could predict the likelihood of their response to a new chemotherapy (Table 1). We explained that the genomic test had limited accuracy, which might result in false negative (misclassifying responders as non-responders) and false positive (misclassifying non-responders as responders) predictions. Finally, we explained the attributes and levels in the DCE questionnaire (Table 2) and asked participants to complete 16 choice tasks [12, 13]. We used the same choice questions for all three samples but varied the underlying form of cancer described for one of the samples from the public: the preamble described an aggressive curable cancer (scenario A) to participants in the first sample from the public and the sample of patients, and a non-aggressive incurable cancer (scenario B) to the second sample from the public. The design of the DCE questionnaire is explained in the next section, and a sample choice task is presented in Table 3.

Table 1 Scenarios for DCE
Table 2 Attributes and levels included in the DCE questionnaire
Table 3 A sample choice task

The extent to which a genomic test will be used in practice is affected by the perceived benefits, risks and costs of using the genomic test. As such, in the DCE questionnaire participants needed to make a trade-off between the consequences of not taking the new chemotherapy when in fact it was beneficial, experiencing additional side effects of new chemotherapy without receiving any clinical benefit, the invasiveness of the genomic testing procedure, the test turnaround time, and the cost of the genomic test.

The descriptions at the beginning of the questionnaire explicitly stated that in the absence of a genomic test, all patients would be offered the new chemotherapy. As such, choosing the “neither” option in a choice task implied a respondent’s preference for opting out of genomic testing and taking the new chemotherapy regardless of the likelihood of response. We did not specify the type of cancer, treatment, or associated genomic test, in order to increase the generalizability of the results. Nonetheless, the sample of patients in this study consisted of former and current lymphoma patients in British Columbia, and the disease descriptions provided in the DCE questionnaires were similar to aggressive and non-aggressive types of lymphoma.

Questionnaire design

A discrete choice experiment is a method for eliciting individuals’ strength of preference for different aspects of a health intervention (or, more generally, a product). The concept of DCE is based on random utility theory and the assumptions that: 1) a health care intervention (or any product or service in general) can be characterized by several attributes; and 2) individuals choose among available health interventions (or products or services) by evaluating and comparing their attributes [12–14]. These attributes can describe health outcomes (e.g. test accuracy, likelihood or severity of treatment side effects) or the intervention process (cost, test procedure, or turnaround time).
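Under random utility theory, each alternative’s utility is a linear function of its attributes plus a random error; with i.i.d. extreme-value errors this yields the conditional logit choice probabilities. A minimal sketch (the attribute weights and profiles below are hypothetical, chosen only to mirror the kinds of attributes in Table 2):

```python
import math

def choice_probabilities(utilities):
    """Conditional (multinomial) logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical preference weights (NOT the estimates from this study):
beta = {"sensitivity_95": 0.17, "specificity_95": 0.10, "cost_per_dollar": -0.00026}

# Linear utilities V = sum(beta_k * x_k) for two test profiles plus an
# opt-out ("neither") alternative with its own alternative-specific constant.
option_a = beta["sensitivity_95"] + beta["cost_per_dollar"] * 1000
option_b = beta["specificity_95"] + beta["cost_per_dollar"] * 500
neither = -0.63  # hypothetical constant for opting out of testing

probs = choice_probabilities([option_a, option_b, neither])
```

The probabilities sum to one, and the alternative with the highest linear utility receives the highest choice probability.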

In this study, we assumed that a genomically-guided cancer treatment could be described by seven attributes (Table 2). Considering the large number of (hypothetical) treatment options that can be generated from these attributes and all combinations of their levels (i.e. a full factorial design), we implemented a fractional factorial design in which we selected 10 versions of the DCE questionnaire, each consisting of only 16 choice tasks. Each respondent therefore completed a randomly assigned version of the DCE questionnaire containing 16 choice tasks. In each choice task, respondents had to choose between two treatment options and a “neither” option. A sample choice task is presented in Table 3 and the complete DCE questionnaire can be found in Additional file 1. The efficiency of our fractional factorial design was assessed by simulating responses: we generated a large number of possible designs and then selected the one that provided the most precise coefficient estimates (i.e. the smallest standard errors) and the best D-efficiency given the sample size [14, 15]. The statistical design of the questionnaire ensured that a random selection of responses would result in preference weights that are not statistically different from zero (i.e. non-informative coefficient estimates).
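The simulation-based design search can be illustrated in miniature as follows. This is a toy reconstruction, not the actual Sawtooth/SAS procedure used in the study: it scores random candidate fractions of a tiny full factorial by D-efficiency, det(X′X)^(1/K), and keeps the best one.

```python
import itertools
import random

def xtx(design):
    """Compute X'X for a design matrix given as a list of rows."""
    k = len(design[0])
    return [[sum(row[i] * row[j] for row in design) for j in range(k)]
            for i in range(k)]

def det(m):
    """Determinant by Gaussian elimination (fine for the small K used here)."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

# Toy full factorial: two effect-coded attributes at 2 levels each (4 profiles).
full = [list(p) for p in itertools.product([-1, 1], repeat=2)]

# Generate many 4-row candidate designs (with replacement, so degenerate,
# repeated-row designs can occur) and keep the most D-efficient one.
random.seed(0)
candidates = [random.choices(full, k=4) for _ in range(200)]
best = max(candidates, key=lambda d: det(xtx(d)) ** (1 / len(d[0])))
```

In this toy case the winning design is one with balanced, uncorrelated columns; the real design search additionally accounted for the 16-task, 10-version structure and the planned sample size.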

Several sources were used for the selection of attributes, including the published literature, physicians’ opinions, and feedback that we received from three pilot surveys. We identified several studies that had investigated characteristics of pharmacogenomic testing and their impact on patients’ and physicians’ decisions to use them [16–18]. We compiled a list of attributes based on the results of these studies and discussed this list with physicians who were in direct contact with cancer patients at the BC Cancer Agency. We then selected the seven attributes deemed to have the greatest influence on patients’ decisions about treatment options. These attributes and levels were then tested in a pilot study in which 7 former cancer patients and 50 individuals from the public completed a preliminary version of the DCE questionnaire. By analyzing the pilot data, we examined the rationality and consistency of the responses and whether the estimated coefficients conformed to our prior expectations in terms of direction and sign. Our prior expectation was that individuals’ preferences (and willingness to pay) decrease with decreasing sensitivity and specificity of the test, and with increasing severity and likelihood of side effects, turnaround time, cost, or invasiveness of the testing procedure. Using this approach, we ensured that respondents understood the content of the DCE questionnaire and our instructions for completing the choice tasks. Furthermore, we used the comments provided by respondents at the end of the questionnaires to hone the preamble, descriptions, attributes, and levels used in the final version of the questionnaire.

Two of the 16 choice tasks in the DCE questionnaire contained a clearly dominant option. By checking answers to these fixed choice tasks, we tested whether respondents actually read and understood the DCE questionnaire. Such fixed choice tasks are commonly included in DCE questionnaires to verify the consistency and rationality of responses. We also included a “neither” option in the choice tasks so that respondents could opt out whenever none of the presented alternatives was adequately attractive. Thus, we avoided forcing non-demanders to choose an alternative and ensured estimation of unconditional rather than conditional preferences [14]. The web-based questionnaire, which facilitated direct data entry into our secured server, was designed using the Choice Based Conjoint (CBC) application of Sawtooth (Sawtooth Software Inc, SSI Web version 6.6.6).

Statistical analysis

Assuming the general framework of random utility theory [14], given a set of options, the log odds of choosing one option is proportional to a linear function of that option’s attributes. Therefore, by gathering stated choice data with a DCE questionnaire of known statistical design, and knowing the attributes and levels presented in each choice task, the coefficients of the attributes can be estimated using generalized linear models. These coefficients, also known as relative preference weights, reflect the average impact of attribute levels on the likelihood of an option being chosen. The ratio of two coefficients can also be interpreted as the marginal rate of substitution (MRS) between the corresponding attributes. Because cost was included as an attribute in the DCE questionnaire, the marginal rate of substitution between each attribute and cost, also known as willingness to pay (WTP) [14], can be calculated. WTP values provide a useful interpretation of the estimated preference weights, as they indicate how much individuals, on average, are willing to pay for a given change in an attribute level [14]. The odds ratios (OR) associated with each attribute level were also calculated. These odds ratios indicate, for two options with otherwise identical attribute levels, how a change in one attribute level affects the odds of an option becoming the preferred choice.
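Concretely, for two levels of an effect-coded attribute the OR is exp(βhigh − βlow), and with cost entered as a continuous (negative) coefficient the WTP is that same difference divided by the negative of the cost coefficient. The short sketch below plugs in the scenario-A sensitivity estimates reported in the Results (0.1748 and −0.1686, with cost coefficient −0.00026); it reproduces the published OR of 1.41, while the WTP comes out near, but not exactly at, the published $1331 because the printed coefficients are rounded.

```python
import math

def odds_ratio(beta_high, beta_low):
    """OR of an option at the high vs. low attribute level: exp(beta_high - beta_low)."""
    return math.exp(beta_high - beta_low)

def wtp(beta_high, beta_low, beta_cost):
    """Willingness to pay: marginal rate of substitution between the attribute and cost."""
    return (beta_high - beta_low) / -beta_cost

# Scenario A estimates as reported in the Results (effect-coded sensitivity,
# continuous cost coefficient)
b_sens_95, b_sens_50, b_cost = 0.1748, -0.1686, -0.00026

or_95_vs_50 = odds_ratio(b_sens_95, b_sens_50)    # ~1.41, matching the reported OR
wtp_95_vs_50 = wtp(b_sens_95, b_sens_50, b_cost)  # ~$1321 vs. reported $1331 (rounding)
```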

The choice data were effect-coded for attributes with discrete levels, with the exception of cost, which was modeled as a continuous variable [19]. Effect coding, instead of continuous coding, relaxes linearity assumptions and allows non-linearity of preference weights across the levels of an attribute to be detected. Modeling cost as a continuous variable also allowed us to estimate WTP values in a way that is easy to interpret. An alternative-specific variable was dummy-coded to indicate the situations where “neither” was chosen [14, 20]. The choice data were analyzed using PROC MDC in SAS 9.2. We pooled the choice data from the two samples from the public, who completed the questionnaire under scenario A and scenario B, and estimated a conditional logit model with choice as the dependent variable. We defined a dummy variable indicating the scenario in the pooled data. By including interaction terms between this dummy variable and the attribute levels in the regression analysis, we compared the estimated preference weights across the two samples from the public. We used the same approach to compare the estimated preference weights in the samples from the public and from patients, both of which completed the questionnaire under scenario A. Prior to this analysis, however, we used propensity scores to select a subsample of the public similar to the sample of patients in terms of age, education, income, and having dependent children. Because the characteristics of the patients in our sample differed from those of the public, using propensity scores was necessary to increase the comparability of results across the two samples.
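For reference, effect coding assigns −1 on every indicator column to the reference level (rather than 0, as in dummy coding), so the estimated level effects of an attribute sum to zero. A minimal sketch using the four sensitivity levels from Table 2, with 50% as the (arbitrary) reference:

```python
def effect_code(level, levels):
    """Effect-code `level` against the last entry of `levels` as the reference.

    Returns one indicator per non-reference level; the reference level is
    coded -1 on all of them, so estimated level effects sum to zero.
    """
    non_ref = levels[:-1]
    if level == levels[-1]:
        return [-1] * len(non_ref)
    return [1 if level == l else 0 for l in non_ref]

sens_levels = ["95%", "80%", "65%", "50%"]  # "50%" serves as the reference level
row_95 = effect_code("95%", sens_levels)    # [1, 0, 0]
row_50 = effect_code("50%", sens_levels)    # [-1, -1, -1]
```

With this coding, the coefficient reported for each level is its deviation from the mean effect of the attribute, which is why both positive (95%) and negative (50%) preference weights appear in Table 5.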

There are a variety of statistical methods for the analysis of DCE data, ranging from conditional logit models to Bayesian mixed logit models [21] and latent class analysis (LCA) [22]. A critical assessment of these methods can be found elsewhere [23]. We chose the conditional logit model for the analysis of the DCE data in this study. However, we verified the estimates and the robustness of our findings by re-running the regressions using a mixed logit model.
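As a rough illustration of that robustness check, a mixed logit replaces the fixed coefficients of the conditional logit with random draws from an assumed taste distribution and averages the resulting choice probabilities. The sketch below is a simulated-draws toy with invented numbers, not the model actually fitted in SAS:

```python
import math
import random

def mixed_logit_probs(x_alts, beta_mean, beta_sd, draws=2000, seed=42):
    """Simulated mixed logit: average conditional-logit choice probabilities
    over normal draws of the random taste coefficients."""
    rng = random.Random(seed)
    n = len(x_alts)
    acc = [0.0] * n
    for _ in range(draws):
        beta = [rng.gauss(m, s) for m, s in zip(beta_mean, beta_sd)]
        v = [sum(b * x for b, x in zip(beta, alt)) for alt in x_alts]
        exps = [math.exp(u) for u in v]
        total = sum(exps)
        for i in range(n):
            acc[i] += exps[i] / total
    return [a / draws for a in acc]

# Two hypothetical test profiles described by effect codes for
# [sensitivity: 95%, specificity: 95%]; means and sds are invented.
alts = [[1, -1], [-1, 1]]
probs = mixed_logit_probs(alts, beta_mean=[0.17, 0.10], beta_sd=[0.3, 0.3])
```

If the averaged probabilities and implied preference orderings agree with the conditional logit results, the fixed-coefficient assumption is unlikely to be driving the conclusions.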


Sample characteristics

Invitations were initially sent to 904 and 836 individuals from the public for participation in the study under scenarios A and B, respectively. Although 588 (65%) individuals in scenario A and 578 (69%) individuals in scenario B responded to the questionnaires, some questionnaires contained incomplete choice tasks. To avoid potential bias from an imbalanced frequency of responses, we restricted our analysis to questionnaires with complete responses to all choice tasks (533 individuals in scenario A and 525 individuals in scenario B). Our sample of patients was limited to an email list provided by the BC Cancer Agency (BCCA). We initially contacted 84 patients through email, and 54 (64%) agreed to participate. After excluding incomplete responses, we had choice data from 38 patients for the final analysis.

Table 4 summarizes the characteristics of the participants in the three samples. The mean age in the sample of patients was 58.2 years, about 10 years higher than in the samples from the public. In addition, 36.1% of individuals in the sample of patients reported a household income of ≥ Can $125,000 (this rate was 6.6% and 5.5% in the samples from the public). Patients who participated in this study were also highly educated: 32.4% had a master’s or doctoral degree (the corresponding proportions were 2.5% and 4.1% in the samples from the public under scenarios A and B, respectively).

Table 4 Characteristics of participants

Estimation results

Comparing preferences of the public under two scenarios A and B

The estimated preference weights, odds ratios, and WTP values associated with the levels of each attribute are reported in Table 5.

Table 5 Estimated preference weights and Willingness to Pay (WTP) in samples from the public

The results suggested that in aggressive curable cancer (scenario A), the preference weight of the public for “sensitivity: 50%” was −0.1686 (s.e. 0.0466) and it increased to 0.1748 (s.e. 0.0266) for a test with “sensitivity: 95%” (Table 5). The impact of test sensitivity on respondents’ choices is also evident in the reported ORs and WTPs. For example, everything else being equal, the odds of choosing a test with 95% sensitivity were 1.41 times the odds of choosing a test with 50% sensitivity, and respondents were willing to pay $1331 to increase test sensitivity from 50% to 95%. However, they were willing to pay only $796 and $487 to increase sensitivity to 80% and 65%, respectively. In non-aggressive incurable cancer (scenario B), the preference weights of “sensitivity: 95%” and “sensitivity: 50%” were 0.2577 (s.e. 0.0270) and −0.2436 (s.e. 0.0479), respectively. Increasing sensitivity from 50% to 95% increased the odds of choice by a factor of 1.65. Although this preference weight was larger in scenario B than in scenario A (0.2577 vs. 0.1748, difference p-value = 0.0241), the corresponding willingness-to-pay values were comparable ($1344 in scenario B vs. $1331 in scenario A). Preference weights and WTPs for a test with a sensitivity of 80% or 65% in scenario B were not significantly different from the corresponding values in scenario A.

In scenario A, the odds of choosing a test with 95% specificity were 1.24 times the odds of choosing a test with 50% specificity, and the public was willing to pay $827 for this improvement in specificity. The preference weight for 95% specificity was more than two-fold larger under scenario B than under scenario A (0.2452 vs. 0.1008, difference p-value < 0.001). Accordingly, under scenario B, the odds of choosing a test with 95% specificity were 1.50 times the odds of choosing a test with 50% specificity, and the corresponding WTP was $1080. Also in scenario B, the preference weight for 65% specificity was negative (−0.1251) and statistically different (difference p-value = 0.0115) from its counterpart under scenario A (0.0051); the public perceived little value in increasing specificity from 50% to 65% in scenario B.

Reducing severity of treatment side effects from severe to mild was associated with large ORs in both scenarios (OR = 2.10 and 2.24 in scenario A and B, respectively). The public was willing to pay as much as $2882 and $2165 to receive a treatment with mild rather than severe side effects in aggressive curable cancer (scenario A) and non-aggressive incurable cancer (scenario B), respectively. Furthermore, the odds of choosing a treatment with 5% likelihood of side effects were 1.62 and 1.75 times the odds of choosing a treatment with 95% likelihood of side effects in scenario A and B, respectively.

Shortening test turnaround time from 12 days to either 7 days or 2 days had the smallest impact on preference weights, ORs, and WTPs under both scenarios. In contrast, the level of invasiveness of the testing procedure had a large impact on estimated preference weights, ORs, and WTP values in both scenarios. For example, the public was willing to pay $2162 and $1474 for a genomic test that could be performed using a mouth swab rather than one involving a liver biopsy in scenario A and B, respectively.

Individuals from the public had negative preference weights for opting out of genetic testing (i.e. choosing the “neither” option). This preference weight was a larger negative number under scenario A than under scenario B (−0.6323 in scenario A vs. −0.4967 in scenario B, difference p-value = 0.0169). The ORs of opting out of genetic testing (vs. taking a test) were 0.53 and 0.61 in scenarios A and B, respectively. The public had a larger WTP for having a test in the aggressive curable cancer scenario ($2451) than in the non-aggressive incurable cancer scenario ($1332). Finally, the preference weight for “genetic test cost” was a larger negative number under scenario B than under scenario A (−0.00026 in A vs. −0.00037 in B, difference p-value = 0.0091), indicating that the public was more price-sensitive in scenario B.

Comparing preferences of the public with preferences of patients under scenario A

Using propensity scores, we identified a subsample of the public (N = 83) with characteristics similar to those of the patients (N = 38) in terms of age, education, income, and number of dependent children. We then pooled the data from the two samples (N = 121) and fitted a conditional logit model to estimate the preference weights, ORs, and WTPs associated with each attribute level (Table 6).
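The matching step can be sketched as a greedy nearest-neighbour match on the propensity score (here, the probability of belonging to the patient sample given the covariates above). Everything in this sketch is illustrative: the scores are invented and the 2:1 matching ratio is an assumption, not the procedure actually used in the study.

```python
def nearest_neighbour_match(patient_scores, public_scores, ratio=2):
    """Match each patient to the `ratio` closest public respondents by
    propensity score, without replacement (greedy nearest-neighbour).
    Returns the indices of the matched public respondents."""
    available = dict(enumerate(public_scores))
    matched = []
    for p in patient_scores:
        for _ in range(ratio):
            if not available:
                return matched
            j = min(available, key=lambda i: abs(available[i] - p))
            matched.append(j)
            del available[j]
    return matched

# Hypothetical propensity scores estimated from a logistic model on
# age, education, income, and dependent children.
patients = [0.62, 0.55, 0.71]
public = [0.10, 0.58, 0.60, 0.20, 0.70, 0.54, 0.65, 0.12]

subsample = nearest_neighbour_match(patients, public)  # indices into `public`
```

The matched indices pick out the public respondents whose scores sit closest to the patients’, discarding those (e.g. scores 0.10 or 0.12 here) who resemble no patient.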

Table 6 Estimated preference weights and Willingness to Pay (WTP) in a propensity score matched subset of the public and patients

The preference weight of patients for “sensitivity: 95%” was significantly larger than that of the public (0.2480 in the public vs. 0.8794 in patients, difference p-value < 0.001). This large difference also translated into large differences in WTP estimates ($2,658 for the public vs. $12,820 for patients) and ORs (1.53 vs. 5.23, respectively). Patients had consistently larger preference weights for better sensitivity and specificity of the test, as was evident from the ORs and WTP values associated with the different levels of sensitivity. Among patients, the odds of choosing a genetic test requiring a mouth swab were 2.43 times the odds of choosing a test requiring a liver biopsy. Patients also preferred a test involving a bone marrow biopsy over a liver biopsy (OR = 1.76), while the public considered both types of biopsy equally unfavorable (OR = 1.04). There was a large difference between the preference weights of the public and of patients for opting out of the test (−1.0346 in the public vs. −0.1185 in patients, difference p-value = 0.0002). Consequently, the public was willing to pay as much as $6050 to have a genetic test, while patients’ WTP for genetic testing was only $919. This indicates that patients had significantly less aversion to opting out of genomic testing.


This study shows the relative impact of different properties of genomically-guided cancer treatment on test uptake. Changes in the severity and likelihood of side effects, as well as the test procedure, had the largest influence on the public’s decision to use genetic testing. In contrast, improving the sensitivity of the test had a larger influence on patients’ decision to use genomic testing.

The type of cancer and its prognosis also influenced the preferences of the public for different attributes of genomic testing. When we compared the results in the two samples from the public, we found that in aggressive curable cancer, individuals emphasized the sensitivity rather than the specificity of the test. In contrast, for a non-aggressive incurable cancer, individuals put similar emphasis on the sensitivity and specificity of the test and expressed strong (positive and negative) preferences toward (high and low) specificity of the test. Furthermore, under this scenario (non-aggressive incurable cancer), the public also had a larger negative preference toward the cost of genomic testing. Because, for a non-aggressive incurable cancer, the gain in survival is ultimately small and not expected to materialize for 13 years, the public may have discounted the benefits of the new chemotherapy and become more selective about the accuracy of genomic testing in this scenario.

Our study suggests that patients and the public have different perceptions about the value of various aspects of genomic testing to guide cancer treatment when facing an aggressive curable cancer. Based on our results, patients were mostly concerned about improving sensitivity of the test (and presumably their survival chance), and in the absence of an adequately sensitive test they preferred opting-out from genomic testing and taking the treatment regardless of its side effects. Conversely, the public had a large negative preference weight for opting-out from genomic testing suggesting that they are more inclined to use a test even with inadequate accuracy. This information may help physicians to tailor their clinical advice considering type of cancer and previous experience of their patient with cancer treatment. For example, if the prognosis of disease is expected to be similar to our scenario for non-aggressive incurable cancer, then perhaps discussing false positive rates of available tests can be of great importance for the average patient. Also, the observed differences between preferences of patients and the public about different biopsy procedures suggest that perhaps physicians can help patients who have no prior experience of cancer treatment in developing a more realistic perception about the relative invasiveness of these procedures.

There is a paucity of studies on preferences for characteristics of genomic testing. The increasing number of new genomic tests ensuing from rapid developments in genomic science underlines the need for further investigation in this area. Knowledge about the strength of preferences for different attributes of genomic testing can lead toward value-based evaluation of these new technologies. In health care systems that rely on public funding, considering these preference weights in funding decisions makes it possible to identify genomic tests with potentially higher value for a covered population. In addition, physicians can better understand patients’ priorities given the type and prognosis of the disease. The differences between the preferences of patients and the public shown in our study also suggest areas that physicians should emphasize when communicating with recently diagnosed patients who presumably have no prior experience of the disease. In a study by Griffith et al., willingness to pay for breast cancer genomic services was estimated by conducting a DCE on 242 individuals with high, moderate, and low risk of developing breast cancer [24]. Using a DCE and following a rigorous methodology, Hall and colleagues [25] explored the factors that influenced participation in genomic carrier testing for Tay–Sachs and cystic fibrosis among a sample from the general community and a sample of the Ashkenazi Jewish community. A recent study [26] also used a DCE to estimate the trade-offs among sensitivity, turnaround time, and cost of a postnatal genomic test to predict genomic abnormalities causing mental retardation in children. Finally, Herbild et al. [27] elicited preferences in the Danish general population for a pharmacogenomic test that could improve the treatment of depression.

Patients’ emphasis on sensitivity has also been shown in the context of usual screening tests for colorectal cancer [28]. In exploring the preferences of 1047 patients with a history of colorectal cancer for different screening modalities, Marshall et al. used a DCE to estimate how the likelihood of uptake may be affected by different characteristics of the test. Similar to our results, they found that the sensitivity of the test had the largest impact on the likelihood of uptake among these patients. A cross-sectional survey by Haga et al. also showed that primary care physicians consider the severity of side effects, followed by the predictive accuracy of a pharmacogenomic test, as the factors with the largest influence on their decision to prescribe it to their patients, while turnaround time has a smaller influence on their decision to use pharmacogenomic testing [16]. These results, considered in the context of our findings, suggest that perhaps neither the public nor physicians share patients’ highest priority of better test sensitivity. Direct comparison of physicians’ and patients’ preferences about genomic testing could provide useful insight into this matter and should be pursued in future research.

A distinguishing feature of our study is its use of three samples to demonstrate how the type of cancer and its prognosis affected preferences for a genomic test, and how the preferences of patients differed from those of the public. Also, in contrast with previous studies, our results are applicable to most genomic tests for guiding cancer treatment, as we did not specify the type of cancer, treatment, or associated genomic test. However, in the absence of a specified cancer type, participants may have made various assumptions about possible prognosis and potential outcomes; this can therefore also be seen as a limitation of our study. Throughout this study, participants made their choices under the following assumptions: 1) if they opted out of genomic testing, they would receive the new treatment regardless of its effect, and 2) the new treatment was covered by their insurance policies. We acknowledge that under different circumstances regarding the effect of genomic testing on access to the new treatment, the current results may not apply. The larger standard errors around the estimated coefficients in the patient sample suggest that this sample was slightly underpowered. However, the sample size was limited by the BC Cancer Agency's contact list of lymphoma patients and by the willingness of those approached to participate, and thus could not be increased. Despite this limitation, all of the point estimates in the patient sample were in line with our prior expectations in terms of magnitude and sign. Moreover, this sample was not representative of cancer patients in BC: participants had higher incomes, higher education levels, and were, on average, 10 years older. We therefore used propensity scoring to identify a subsample of the public with similar characteristics in order to increase the comparability of the results.
This issue, however, potentially limits the external validity of the results based on these samples. Actual decisions that patients or the public make in real-life situations may deviate from their stated preferences in surveys such as ours; this effect has been shown in the context of genetic testing as well [29]. Nevertheless, several studies provide evidence of strong correlations between stated and real WTP [30] and preferences [31]. Answering DCE questions can be a complex task, and the accuracy of responses may ultimately depend on participants' numeracy (i.e., ability to interpret quantitative information) [32], language skills, familiarity with the subject, and attentiveness while completing the questionnaire. We used several standard approaches to assure data quality, including a fixed choice task to test the rationality of responses and checks of the time each respondent spent completing the questionnaire. Overall, given the directions and signs of the estimated preference weights, we believe that our results are robust and were not compromised by these potential problems. Finally, we acknowledge that the factors that can affect uptake of a genomic test are not limited to the seven attributes included in our DCE design. We excluded several important aspects (e.g., the risk involved in the testing procedure) that individuals may take into account when making an actual decision about genomic testing. This choice was made to keep the number of attributes to a minimum and to avoid overly complex choice tasks [33].
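The propensity-scoring step described above can be sketched as nearest-neighbour matching on estimated scores. The illustration below is a minimal, hypothetical version of that idea (the respondent identifiers and scores are invented, and in practice the scores would first be estimated, for example by a logistic regression of sample membership on income, education, and age); it is not the exact procedure used in the study.

```python
def match_nearest(patient_scores, public_scores):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    patient_scores, public_scores: dicts mapping respondent id -> score.
    Returns a dict mapping each patient id to the id of the unmatched
    public respondent with the closest score (matching without replacement).
    """
    available = dict(public_scores)
    matches = {}
    for pid, score in sorted(patient_scores.items()):
        # pick the still-available public respondent with the closest score
        best = min(available, key=lambda q: abs(available[q] - score))
        matches[pid] = best
        del available[best]
    return matches

# Hypothetical scores: higher = more similar to the patient sample
# (older, higher income and education).
patients = {"p1": 0.82, "p2": 0.64}
public = {"g1": 0.10, "g2": 0.60, "g3": 0.85, "g4": 0.30}
print(match_nearest(patients, public))  # p1 pairs with g3, p2 with g2
```

Matching without replacement, as here, keeps the comparison subsample the same size as the patient sample, at the cost of discarding unmatched public respondents.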

Our study demonstrates the strength of individuals' preferences for characteristics of a genomic test when faced with an aggressive but curable cancer versus a non-aggressive but incurable cancer. Additionally, these results suggest which characteristics of genomic testing have greater potential value for society and for patients. Physicians may use these average preferences as a benchmark when advising cancer patients about pharmacogenomic testing. These preference weights can also be used to inform funding decisions by incorporating relevant populations' valuations of different aspects of genomic testing.
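As an illustration of how such preference weights translate into monetary values, the marginal WTP for an attribute change in a conditional-logit DCE is commonly computed as the attribute coefficient divided by the negative of the cost coefficient (the marginal utility of money). The coefficients below are hypothetical round numbers for illustration, not estimates from our models.

```python
def marginal_wtp(beta_attr, beta_cost):
    """Marginal willingness-to-pay implied by a conditional-logit model.

    beta_attr: coefficient on the attribute change of interest
               (e.g., improving test sensitivity from 50% to 95%).
    beta_cost: coefficient on out-of-pocket cost (expected negative).
    The ratio -beta_attr / beta_cost converts utility into dollars.
    """
    return -beta_attr / beta_cost

# Hypothetical: attribute coefficient 0.40, cost coefficient -0.0003 per $.
print(marginal_wtp(0.40, -0.0003))  # about $1333 for the attribute change
```

Because this WTP is a ratio of two estimated coefficients, its confidence interval is usually obtained by the delta method or by bootstrapping rather than from the coefficient standard errors directly.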


We explored the relative impact of different properties of genomically-guided cancer treatment on test uptake. We found that the type and prognosis of cancer affected preferences for genomically-guided treatment. Our results also suggest that patients and the public have different perceptions of the value of various aspects of genomic testing to guide cancer treatment. Physicians may use these average preferences as a benchmark when advising cancer patients about pharmacogenomic testing, and the preference weights can inform funding decisions by considering relevant populations' valuations of different aspects of genomic testing.


References

  1. Torpy JM, Lynm C, Glass RM: JAMA patient page. Cancer: the basics. JAMA. 2010, 304 (14): 1628. 10.1001/jama.304.14.1628.
  2. Mullighan CG: New strategies in acute lymphoblastic leukemia: translating advances in genomics into clinical practice. Clin Cancer Res. 2010, 17 (3): 396-400.
  3. Allison M: Is personalized medicine finally arriving? Nat Biotechnol. 2008, 26 (5): 509-517. 10.1038/nbt0508-509.
  4. Wolff AC: Liposomal anthracyclines and new treatment approaches for breast cancer. Oncologist. 2003, 8 (Suppl 2): 25-30.
  5. Capdeville R, Silberman S, Dimitrijevic S: Imatinib: the first 3 years. Eur J Cancer. 2002, 38 (Suppl 5): S77-S82.
  6. Najafzadeh M, Davis JC, Joshi P, Marra C: Barriers for integrating personalized medicine into clinical practice: a qualitative analysis. Am J Med Genet A. 2013, 161A (4): 758-763.
  7. Rogausch A, Prause D, Schallenberg A, Brockmoller J, Himmel W: Patients’ and physicians’ perspectives on pharmacogenetic testing. Pharmacogenomics. 2006, 7 (1): 49-59. 10.2217/14622416.7.1.49.
  8. Conti R, Veenstra DL, Armstrong K, Lesko LJ, Grosse SD: Personalized medicine and genomics: challenges and opportunities in assessing effectiveness, cost-effectiveness, and future research priorities. Med Decis Making. 2010, 30 (3): 328-340. 10.1177/0272989X09347014.
  9. Grosse SD, Wordsworth S, Payne K: Economic methods for valuing the outcomes of genetic testing: beyond cost-effectiveness analysis. Genet Med. 2008, 10 (9): 648-654. 10.1097/GIM.0b013e3181837217.
  10. Wittink MN, Cary M, Tenhave T, Baron J, Gallo JJ: Towards patient-centered care for depression: conjoint methods to tailor treatment based on preferences. Patient. 2010, 3 (3): 145-157. 10.2165/11530660-000000000-00000.
  11. Sullivan R, Peppercorn J, Sikora K, Zalcberg J, Meropol NJ, Amir E, Khayat D, Boyle P, Autier P, Tannock IF, et al: Delivering affordable cancer care in high-income countries. Lancet Oncol. 2011, 12 (10): 933-980. 10.1016/S1470-2045(11)70141-3.
  12. Louviere JJ, Hensher DA, Swait JD: Stated Choice Methods: Analysis and Applications. 2000, Cambridge University Press.
  13. McFadden D: Econometric models for probabilistic choice among products. J Bus. 1980, 53 (3): S13-S29.
  14. Lancsar E, Louviere J: Conducting discrete choice experiments to inform healthcare decision making: a user’s guide. Pharmacoeconomics. 2008, 26 (8): 661-677. 10.2165/00019053-200826080-00004.
  15. Louviere JJ, Islam T, Wasi N, Street D, Burgess L: Designing discrete choice experiments: do optimal designs come at a price? J Consum Res. 2008, 35 (2): 360-375.
  16. Haga SB, Burke W, Ginsburg GS, Mills R, Agans R: Primary care physicians’ knowledge of and experience with pharmacogenetic testing. Clin Genet. 2012, 82 (4): 388-394. 10.1111/j.1399-0004.2012.01908.x.
  17. Payne K, Fargher EA, Roberts SA, Tricker K, Elliott RA, Ratcliffe J, Newman WG: Valuing pharmacogenetic testing services: a comparison of patients’ and health care professionals’ preferences. Value Health. 2011, 14 (1): 121-134. 10.1016/j.jval.2010.10.007.
  18. Issa AM, Tufail W, Hutchinson J, Tenorio J, Baliga MP: Assessing patient readiness for the clinical adoption of personalized medicine. Public Health Genomics. 2009, 12 (3): 163-169. 10.1159/000189629.
  19. Bech M, Gyrd-Hansen D: Effects coding in discrete choice experiments. Health Econ. 2005, 14 (10): 1079-1083. 10.1002/hec.984.
  20. Kontoleon A, Yabe M: Assessing the impacts of alternative ‘opt-out’ formats in choice experiment studies: consumer preferences for genetically modified content and production information in food. J Agric Policy Res. 2005, 5: 1-43.
  21. Train KE: Discrete Choice Methods with Simulation. 2003, Cambridge University Press.
  22. Greene WH, Hensher DA: A latent class model for discrete choice analysis: contrasts with mixed logit. Transp Res B Methodol. 2003, 37 (8): 681-698. 10.1016/S0191-2615(02)00046-2.
  23. Louviere J: What you don’t know might hurt you: some unresolved issues in the design and analysis of discrete choice experiments. Environ Resour Econ. 2006, 34 (1): 173-188. 10.1007/s10640-005-4817-0.
  24. Griffith GL, Edwards RT, Williams JM, Gray J, Morrison V, Wilkinson C, Turner J, France B, Bennett P: Patient preferences and National Health Service costs: a cost-consequences analysis of cancer genetic services. Fam Cancer. 2008, 27: 27.
  25. Hall J, Fiebig DG, King MT, Hossain I, Louviere JJ: What influences participation in genetic carrier testing? Results from a discrete choice experiment. J Health Econ. 2006, 25 (3): 520-537. 10.1016/j.jhealeco.2005.09.002.
  26. Regier DA, Ryan M, Phimister E, Marra CA: Bayesian and classical estimation of mixed logit: an application to genetic testing. J Health Econ. 2009, 28 (3): 598-610. 10.1016/j.jhealeco.2008.11.003.
  27. Herbild L, Gyrd-Hansen D, Bech M: Patient preferences for pharmacogenetic screening in depression. Int J Technol Assess Health Care. 2008, 24 (1): 96-103.
  28. Marshall DA, Johnson FR, Phillips KA, Marshall JK, Thabane L, Kulin NA: Measuring patient preferences for colorectal cancer screening using a choice-format survey. Value Health. 2007, 10 (5): 415-430. 10.1111/j.1524-4733.2007.00196.x.
  29. Sanderson SC, O’Neill SC, Bastian LA, Bepler G, McBride CM: What can interest tell us about uptake of genetic testing? Intention and behavior amongst smokers related to patients with lung cancer. Public Health Genomics. 2010, 13 (2): 116-124. 10.1159/000226595.
  30. Bryan S, Jowett S: Hypothetical versus real preferences: results from an opportunistic field experiment. Health Econ. 2010, 19 (12): 1502-1509. 10.1002/hec.1563.
  31. Mark TL, Swait J: Using stated preference and revealed preference modeling to evaluate prescribing decisions. Health Econ. 2004, 13 (6): 563-573. 10.1002/hec.845.
  32. Woloshin S, Schwartz LM, Moncur M, Gabriel S, Tosteson AN: Assessing values for health: numeracy matters. Med Decis Making. 2001, 21 (5): 382-390. 10.1177/0272989X0102100505.
  33. Reed Johnson F, Lancsar E, Marshall D, Kilambi V, Muhlbacher A, Regier DA, Bresnahan BW, Kanninen B, Bridges JF: Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force. Value Health. 2013, 16 (1): 3-13. 10.1016/j.jval.2012.08.2223.

Pre-publication history

  1. The pre-publication history for this paper can be accessed here:



Acknowledgements

We thank all participants who shared their opinions in this experiment, particularly the lymphoma patients in British Columbia who volunteered to take part in this research. Mehdi Najafzadeh is grateful for support from CIHR (Frederick Banting and Charles Best Canada Graduate Scholarship).

Financial support

This study was funded by Genome Canada/Genome BC.

Author information

Correspondence to Carlo A Marra.

Additional information

Competing interests

The authors have no financial or non-financial competing interests in relation to the content of this study.

Authors’ contributions

MN: Design, Data, Analysis, Interpretation, Drafting the Manuscript, and Final Approval; KJ: Design, Drafting the Manuscript, and Final Approval; SP: Data, Drafting the Manuscript, and Final Approval. JC: Conception, Data, Interpretation, Drafting the Manuscript, and Final Approval; MM: Conception, Drafting the Manuscript, and Final Approval; LL: Interpretation, Drafting the Manuscript, and Final Approval; CM: Conception, Design, Data, Interpretation, Drafting the Manuscript, and Final Approval. All authors read and approved the final manuscript.




Keywords
  • Pharmacogenomics
  • Genomic medicine
  • Personalized medicine
  • Genetic testing
  • Discrete choice experiment
  • Conjoint analysis
  • Preference elicitation
  • Cancer treatment