Study design and data collection
This pioneering study examined family caregivers' (FCs') perceptions of the quality of care provided by community-based dementia care centers. A structured questionnaire was used to measure caregivers' satisfaction with service quality and to assess the gap between perceived and expected quality. An onsite survey was conducted over 4 months (September–December 2019) at eight community-based dementia care centers in Taiwan. These centers were selected because they operated under the guidance of the affiliated hospital. Potential participants were approached by a well-trained nurse, who outlined the purpose of the study and invited them to participate in the survey. Participants were given a self-administered questionnaire after providing consent.
We recruited caregivers of patients with dementia at eight dementia centers in one city using convenience sampling. The target sample size at each center ranged from 15 to 20, for a total of about 155 samples. The inclusion criterion was being an FC of a patient with dementia who was receiving long-term care (LTC). Exclusion criteria were (1) unwillingness to participate after being fully informed and (2) less than 1 month of service use. Because the expected- and perceived-quality satisfaction variables served as the independent and dependent variables in the analysis, respondents with incomplete data (more than half of the values missing in any quality dimension) were excluded. In total, 125 questionnaires were distributed; after eliminating those with incomplete data, 95 were included in the data analysis.
Regarding the sample size for factor analysis, there are two major recommendations: samples of fewer than 100 should have factor loadings of no less than 0.50 [18], and the subject-to-variable ratio should be at least 10 cases for each item in the instrument being used [19]. The effective sample size in this study was 95, with a subject-to-variable ratio of 23 for each item, and all factor loadings exceeded 0.7. The 95 samples therefore provided adequate statistical power for the analysis.
Measures
The questionnaire was designed based on the Service Quality (SERVQUAL) model [20], which is recommended as a good scale for measuring service quality [21]. A structured questionnaire was used to survey the FCs of patients with dementia who were receiving LTC. It was divided into two parts: (A) demographic information—age, gender, education, marital status, and occupation; and (B) community care expectations and perceived performance, assessed using the expanded SERVQUAL scale [20, 22]. The SERVQUAL scale evaluates five dimensions of community care service quality—tangibility, reliability, responsiveness, assurance, and empathy—using 20 items rated on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree). The questionnaire demonstrated (i) adequate content validity indices (CVIs > 0.90) for all five dimensions, as verified by three experts, and (ii) adequate reliability through high internal consistency in all five dimensions (all Cronbach’s α > 0.90). The scale thus demonstrated adequate construct validity and acceptable reliability for this study.
Analytical framework
All collected data were statistically analyzed using SPSS version 22.0 (IBM Corp., Armonk, NY, USA). Descriptive statistics, chi-square tests, Pearson correlations, t-tests, and one-way analysis of variance (ANOVA) were used to analyze the data. The analytical framework consists of three steps:
Step 1: A penalty-reward contrast analysis (PRCA) [23]
A preliminary step in IRPA is the penalty-reward contrast analysis (PRCA), a multiple regression analysis with dummy variables [23]. Because the effect is nonlinear, dummy variables were adopted to analyze the nonlinear relationship between the performance of quality attributes and customer satisfaction [24, 25]. Logistic regression was developed to describe this nonlinear relationship [26], to infer the odds ratio of customer satisfaction to customer dissatisfaction attributable to quality-attribute performance, and to analyze the influence of quality-attribute performance on customer satisfaction. Analyzing quality attributes using quantified odds facilitates a better understanding of customer satisfaction.
For each quality attribute, two sets of dummy variables were created. In the first set, the lowest performance score was coded as “1” (if attribute = 1) and all other ratings were coded as “0” (if attribute = 2, 3, 4, or 5). Conversely, in the second set, the highest performance rating was coded as “1” (if attribute = 5), whereas all other ratings were coded as “0” (if attribute = 1, 2, 3, or 4). Customer satisfaction (CS) was then regressed on these two dummy sets, yielding two unstandardized coefficients (the penalty and reward indices) for each attribute [27]. The reward index (RI) and penalty index (PI) identify whether a service quality attribute plays a significant role in customer satisfaction or dissatisfaction, respectively.
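The dummy-coding and regression step above can be sketched as follows. This is a minimal illustration with hypothetical ratings, not the study's data, and NumPy's least-squares routine stands in for the regression procedure:

```python
import numpy as np

def prca(attribute, satisfaction):
    """Return (penalty index, reward index) for one quality attribute.

    attribute: 1-5 performance ratings; satisfaction: overall CS scores.
    """
    x = np.asarray(attribute, dtype=float)
    y = np.asarray(satisfaction, dtype=float)
    low = (x == 1).astype(float)   # dummy set 1: lowest performance rating
    high = (x == 5).astype(float)  # dummy set 2: highest performance rating
    # Multiple regression of CS on the two dummy variables (with intercept)
    X = np.column_stack([np.ones_like(y), low, high])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    penalty, reward = coef[1], coef[2]  # unstandardized coefficients
    return penalty, reward

# Hypothetical example: 10 respondents rating one attribute
ratings      = [1, 2, 3, 5, 5, 4, 1, 3, 5, 2]
satisfaction = [2, 3, 3, 5, 5, 4, 1, 3, 5, 3]
pi, ri = prca(ratings, satisfaction)  # pi negative, ri positive
```

With this coding, the penalty coefficient is the mean satisfaction shift associated with the lowest rating and the reward coefficient the shift associated with the highest, relative to all intermediate ratings.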
Step 2: An impact range-performance analysis (IRPA) [17]
The next step is to calculate each attribute's range of impact on customer satisfaction (RICS) by summing the absolute values of its penalty index (PI) and reward index (RI). PI, RI, and RICS were then used to calculate impact-asymmetry (IA) scores, which quantify the extent to which an attribute has satisfaction-generating potential (SGP) relative to dissatisfaction-generating potential (DGP). Following Mikulić and Prebežac [11], the following equations were used:
1. SGPi = Ri / RICSi … (1)
2. DGPi = |Pi| / RICSi … (2)
3. IAi = SGPi – DGPi … (3)
in which:
1. Ri = reward index for attribute i;
2. Pi = penalty index for attribute i;
3. RICSi = |Pi| + Ri = range of impact on overall customer satisfaction; and
4. SGPi + DGPi = 1.
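As a worked illustration of Eqs. (1)–(3), consider a hypothetical attribute with a penalty index of –1.7 and a reward index of 1.8 (illustrative numbers only, not results from this study):

```python
# Hypothetical penalty/reward indices from the PRCA step (illustrative only)
penalty = -1.7  # Pi: coefficient of the lowest-rating dummy
reward = 1.8    # Ri: coefficient of the highest-rating dummy

rics = abs(penalty) + reward  # RICSi = |Pi| + Ri = 3.5
sgp = reward / rics           # Eq. (1): satisfaction-generating potential
dgp = abs(penalty) / rics     # Eq. (2): dissatisfaction-generating potential
ia = sgp - dgp                # Eq. (3): impact-asymmetry index

# By construction, SGPi + DGPi = 1
assert abs(sgp + dgp - 1.0) < 1e-12
```

Here the IA score is slightly positive, indicating marginally more satisfaction-generating than dissatisfaction-generating potential.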
A two-dimensional grid, divided into four quadrants, was constructed with each attribute’s range of impact on customer satisfaction (RICS) on the X-axis and the mean attribute-performance score (APS) on the Y-axis. Improvement priority increases with larger RICS and lower APS [11].
Step 3: An impact-asymmetry analysis (IAA) [11, 28]
IAA is used to explore the key determinants of customer satisfaction/dissatisfaction among dementia care service quality attributes. Using the grand mean values of IA (Y-axis) and RICS (X-axis) as gridlines, the IAA provided the relative positioning of each attribute. Since IA is the arithmetic difference between SGP and DGP, it can be used as a standard for classifying service attributes into various levels. For example, an attribute with a positive IA value can be classified as a satisfier or delighter. In contrast, an attribute with a negative IA value is classified as a dissatisfier or frustrator. An attribute with a value close to 0 can be classified as a hybrid, because it has little effect on either customer satisfaction or dissatisfaction. The Y-axis was divided into five parts, based on the degree of impact asymmetry on overall satisfaction: (i) “delighters” (impact asymmetry index [IAI] > 0.4), (ii) “satisfiers” (0.4 ≥ IAI > 0.1), (iii) “hybrids” (0.1 ≥ IAI ≥ –0.1), (iv) “dissatisfiers” (–0.1 > IAI ≥ –0.4), and (v) “frustrators” (IAI < –0.4). In addition to the IA scores, RICS cutoffs were set according to the distribution of attributes: (i) “high-impact attributes” (RICStangibles > 0.57, RICSreliability > 0.73, RICSresponsiveness > 0.58, RICSassurance > 0.41, RICSempathy > 0.71); (ii) “medium-impact attributes” (0.45 < RICStangibles ≤ 0.57, 0.49 < RICSreliability ≤ 0.73, 0.38 < RICSresponsiveness ≤ 0.58, 0.26 < RICSassurance ≤ 0.41, 0.33 < RICSempathy ≤ 0.71); and (iii) “low-impact attributes” (RICStangibles ≤ 0.45, RICSreliability ≤ 0.49, RICSresponsiveness ≤ 0.38, RICSassurance ≤ 0.26, RICSempathy ≤ 0.33) [12, 17].
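The five-category IAI classification above can be expressed as a simple threshold function (a sketch using exactly the cutoffs listed in the text; the function name is illustrative):

```python
def classify_iai(iai: float) -> str:
    """Classify a service quality attribute by its impact-asymmetry index (IAI)."""
    if iai > 0.4:
        return "delighter"     # IAI > 0.4
    if iai > 0.1:
        return "satisfier"     # 0.4 >= IAI > 0.1
    if iai >= -0.1:
        return "hybrid"        # 0.1 >= IAI >= -0.1
    if iai >= -0.4:
        return "dissatisfier"  # -0.1 > IAI >= -0.4
    return "frustrator"        # IAI < -0.4

# Hypothetical examples
classify_iai(0.03)   # "hybrid"
classify_iai(-0.45)  # "frustrator"
```

Because the five intervals partition the IAI range without gaps or overlaps, each attribute receives exactly one label.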