Open Access
Perceptions about data-informed decisions: an assessment of information-use in high HIV-prevalence settings in South Africa
BMC Health Services Research, volume 17, Article number: 765 (2017)
Background
Information-use is an integral component of a routine health information system (RHIS) and essential to influence policy-making, program actions and research. Despite increasing amounts of routinely collected data, planning and resource-allocation decisions made by health managers for managing HIV programs are often not based on data. This study investigated the use of information, and barriers to using routine data for monitoring prevention of mother-to-child transmission of HIV (PMTCT) programs, in two high HIV-prevalence districts in South Africa.
Methods
We undertook an observational study using a multi-method approach, including an inventory of facility records and reports. The Performance of Routine Information Systems Management (PRISM) diagnostic ‘Use of Information’ tool was used to assess the PMTCT information system for evidence of data use in 57 health facilities in two districts. Twenty-two in-depth interviews were conducted with key informants to investigate barriers to information use in decision-making. Participants were purposively selected based on their positions and experience with producing PMTCT data, using data for management purposes, or both. We computed descriptive statistics and used a general inductive approach to analyze the qualitative data.
Results
Despite the availability of mechanisms and processes to facilitate information-use in about two-thirds of the facilities, evidence of information-use (i.e., an indication of some form of information-use in available RHIS reports) was demonstrated in only 53% of the facilities. Information was inadequately used at district and facility levels to inform decisions and planning, but was selectively used for reporting and monitoring program outputs at the provincial level. The inadequate use of information stemmed from organizational issues such as the lack of a culture of information-use, a lack of trust in the data, and the inability of program and facility managers to analyze, interpret and use information.
Conclusions
Managers’ inability to use information implied that decisions for program implementation and improving service delivery were not always based on data. This lack of data use could negatively influence the delivery of health care services. Facility and program managers should be provided with opportunities for capacity development as well as practice-based, in-service training, and should be supported to use information for planning, management and decision-making.
Background
The use of information to influence policy, program action and research has been described as a vital output of a functional health information system [1,2,3]; however, for information to be used, data must first be processed and analyzed into a usable format. More emphasis has been placed on data collection in low- and middle-income countries (LMICs) than on information-use. Limited information-use is in part related to the suboptimal quality of data generated by routine health information systems (RHIS) [5,6,7,8,9,10,11], and to the absence of a culture of information-use [12,13,14], defined as “the capacity and control to promote values and beliefs among members of an organization by collecting, analyzing and using information to accomplish the organization’s goals and mission”.
Substantial investments to promote and improve data-use have been made by organizations such as MEASURE Evaluation. Tools and interventions have also been developed specifically to encourage data demand and use in LMICs. For instance, Nutley and Reynolds created a logic model for strengthening the use of health data in decision-making. Based on work by Aqil et al., the Health Metrics Network [1, 16], and Patton, this model involves eight interventions that aim to improve data demand and use: assessing and improving the data-use context; engaging data users and data producers; improving data quality; improving data availability; identifying information needs; building capacity in data-use core competencies; strengthening organizations’ data demand and use infrastructures; and monitoring, evaluating, and communicating the results of data-use interventions. This model has been applied in Ivory Coast to improve the use of data in decision-making; by building linkages between data collection and decision-making processes such as program review, planning, advocacy and policy development, it increased information-use at the district level from 40% in 2008 to 70% in 2012.
Despite the increasing demand for and availability of data, which are often neither accurate nor reliable [6, 7, 10, 19,20,21,22], many decisions made by health managers in terms of planning and resource allocation for managing HIV programs are still not based on data [13, 23, 24]. The inadequate use of information generated from RHIS is alarming, considering the investments made and resources channeled towards collecting such data. To highlight the magnitude of this problem, 120 countries globally were asked to self-rate their performance in terms of information-use in HIV program planning, management, and implementation, using a Likert scale from 0 (low) to 5 (high). More than half of the countries surveyed through the United Nations General Assembly Special Session on HIV/AIDS (UNGASS) National Composite Policy Index, which addresses monitoring and evaluation (M&E) systems, rated their data-use experience as below average. Only 48% of the 42 countries in sub-Saharan Africa, including South Africa, rated their data-use experience in HIV planning and implementation as above average.
In South Africa, almost 16 years after the implementation of the district health information system (DHIS), the major source of primary health care (PHC) information, data from the system have not been adequately utilized to inform policies and service delivery effectively, leaving data producers with the perception that data collection is only for reporting purposes. Reviews of the prevention of mother-to-child transmission of HIV (PMTCT) program have highlighted challenges such as the lack of data to monitor progress across the PMTCT cascade [21, 27]. In the absence of accurate and reliable data, it is difficult, if not impossible, to make evidence-based resource-allocation decisions to increase the efficiency and effectiveness of the PMTCT program. The non-use of information for planning may pose a considerable challenge for policy makers in terms of decisions about investment in HIV prevention and treatment. It is therefore important to use data to identify whether interventions are effective in curtailing new HIV transmissions and averting deaths in children and their mothers.
The insufficient use of data for HIV prevention has an impact on the supply of treatment, illustrated by a recent national crisis of antiretroviral (ARV) stock-outs in South Africa, which threatened the lives of thousands of patients and undermined efforts to fight HIV. This crisis could have been averted with proper planning and budgeting informed by data from PHC facilities. Pillay, Rohde and Van den Bergh (The District Health Information System: a briefing document for Department of Health managers, unpublished) suggest that, despite the routine collection of data in all South African public health facilities, very little of the information is used by decision-makers. The literature highlights several reasons for the insufficient use of information for planning and management purposes. These include the lack of a reliable data source; a lack of skills for data interpretation and utilization; and problems relating to data-collection processes, such as poor-quality, incomplete and untimely data.
There is a paucity of data in South Africa about barriers to using information for management purposes. This study sought to investigate the use of information, and barriers to using routine health information for program monitoring, in the context of maternal and child health programs in two high HIV-prevalence districts in South Africa.
Methods
Study design, setting and participants
We undertook an observational study using a multi-method approach, including an inventory of registers, monthly reports and meeting minutes at each of the 57 selected health facilities in two districts with high HIV prevalence, using the Performance of Routine Information Systems Management (PRISM) diagnostic ‘Use of Information’ tool [2, 3]. The facilities selected were 27 health facilities in the Amajuba District (KwaZulu-Natal Province) and 30 in Khayelitsha and the Eastern Sub-districts in the Cape Metro/City of Cape Town District (Western Cape Province). The study setting and sampling have been extensively described elsewhere [7, 33]. We conducted in-depth interviews with 22 purposively selected health-service personnel between July and December 2013 to explore their experiences regarding PMTCT data production and use, and their perceptions about barriers to information-use. Participants were selected based on their positions and experience with producing PMTCT data, using data for management purposes, or both. The key informants included district managers/coordinators, sub-district coordinators, PMTCT program managers, M&E officers, facility managers (FMs), and staff involved in data collection (Table 1).
Data collection and analysis
Quantitative: Promotion and use of information
Two teams of two trained fieldworkers each reviewed data from 228 reports in all 57 facilities for evidence of data-use, measured using two criteria: (1) the availability of reports such as feedback reports, quarterly reports, health service reports and minutes of meetings; and (2) reviews of these reports for evidence of information-use. The set of weighted elements used to calculate this indicator includes: RHIS report production; frequency of RHIS reports; types of reports produced; display of information at the facility level; use of information in available reports at the facility; types of decisions based on types of analyses; discussion and decisions based on RHIS information; and promotion and supervision by the district office. Furthermore, the promotion of information-use at the facility level indicates whether mechanisms and processes that encourage information-use are in place. These data were captured and analyzed using simple descriptive statistics, such as measures of frequency and measures of central tendency.
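The indicator described above is a composite of weighted elements. As a minimal sketch of how such a score can be computed, assuming hypothetical element names and weights (the official PRISM weighting scheme is not reproduced here), a facility-level percentage can be derived from binary observations:

```python
# Illustrative sketch of a PRISM-style composite "Use of Information" score.
# Element names and weights are hypothetical, not the official PRISM values.

ELEMENT_WEIGHTS = {
    "report_production": 2,
    "report_frequency": 1,
    "information_displayed": 2,
    "use_in_reports": 3,
    "decisions_from_analyses": 3,
    "district_promotion": 2,
}

def facility_score(elements: dict) -> float:
    """Weighted percentage score from binary (0/1) element observations."""
    total_weight = sum(ELEMENT_WEIGHTS.values())
    achieved = sum(ELEMENT_WEIGHTS[name] * int(bool(elements.get(name, 0)))
                   for name in ELEMENT_WEIGHTS)
    return 100.0 * achieved / total_weight

# Example: a facility that produces and displays reports but shows no
# evidence of decisions based on the data, and no district promotion.
example = {"report_production": 1, "report_frequency": 1,
           "information_displayed": 1, "use_in_reports": 0,
           "decisions_from_analyses": 0, "district_promotion": 0}
print(round(facility_score(example), 1))  # 38.5
```

Scores of this kind can then be summarized across facilities with the descriptive statistics mentioned above (frequencies and measures of central tendency).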
Qualitative: In-depth interviews
The principal investigator and a research assistant who had completed training in qualitative data-collection processes conducted the individual interviews in English, with each session lasting 20–45 min. An interview schedule was developed with a predetermined set of questions and themes on data quality, information-use, training, skills and information flow. This schedule served as a checklist to ensure the same questions were administered to all key informants, in order to elicit systematic responses that could be easily categorized and analyzed. A trained researcher transcribed the digital audio recordings of the interviews. The transcripts were proofread and repeatedly read, first by the research assistant, and later by the lead author, who compared the transcripts to the original audio recordings. Errors were then corrected.
We analyzed the qualitative data using a general inductive approach based on the techniques of systematically identifying emerging themes, categories, or patterns from the data . The lead author and an experienced qualitative analyst independently coded responses from all participants by examining the transcripts, and identifying similar emerging themes. These independent analyses were then compared for consistency and discrepancies were identified through critical evaluation of the sets of themes. The source quotes were reviewed and agreed upon, and a final thematic report was generated from the combined analyses. Results were then shared with program managers and staff at the district level for validation and the findings were revised based on feedback from the managers. Direct quotations are used to illustrate key issues and themes.
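The study compared the two coders' independent analyses qualitatively. One common way to quantify such inter-coder consistency, offered here only as a hedged illustration and not as part of the authors' method, is a simple percent-agreement calculation over segments that each coder assigned a theme code:

```python
# Hypothetical sketch: percent agreement between two coders who each
# assigned one theme code per transcript segment.

def percent_agreement(coder_a: list, coder_b: list) -> float:
    """Share (as a percentage) of segments coded identically by both coders."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("code lists must be non-empty and of equal length")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Illustrative theme codes drawn from this study's theme names.
a = ["trust", "skills", "trust", "feedback", "skills"]
b = ["trust", "skills", "packaging", "feedback", "skills"]
print(percent_agreement(a, b))  # 80.0
```

More robust agreement statistics (e.g., Cohen's kappa, which corrects for chance agreement) would be preferable in practice; percent agreement is shown only because it maps directly onto the "compared for consistency" step described above.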
The interviews were complemented by observations at facility level, which included observing the availability of any form of reports such as wall charts, quarterly reports, feedback reports or health service reports. Field workers were required to observe the display of maternal and child health-related information, in the form of graphs, charts or tables at each study facility, and to verify if the displayed information was updated. These were collated and compared across facilities.
Results
The percentage distribution of information-use at the facilities during the 3 months prior to the study is presented in Fig. 1, which includes a district-level indicator, ‘Promoting use of information’, for comparison against ‘Information-use’. Whereas about two-thirds of the facilities reported having mechanisms and processes to facilitate information-use (‘Promoting use of information’), evidence of information-use (i.e., an indication of some form of information-use in available RHIS reports) was demonstrated in only 53% of the facilities.
Despite the availability of RHIS reports in 98% of the facilities, only half used the information in these reports. While 82% of the facilities claimed they kept official records of meetings held during the 3 months preceding the survey, only about two-thirds discussed RHIS findings such as patient utilization, disease data, service coverage or medicine stock-outs in meetings.
An overview of elements assessed for the promotion of information-use is presented in Fig. 2. In over 90% of the facilities surveyed, managers reported having received annual/monthly planned targets from the district offices that were based on RHIS information, and that they had participated in meetings at the district level to discuss RHIS performance over the 3 months prior to the survey. However, managers reported that documentation showing information-use for advocacy purposes, and examples from the district offices of how RHIS information had been successfully utilized in the past, were not available in more than half of the surveyed facilities.
Figure 3 shows the distribution of information-use for specific decisions in available reports at the facility. Despite 82% of the facilities having reported the use of RHIS information to review strategies for achieving performance targets, only about half actually used the information to make decisions on resource mobilization based on a comparison of services. However, two-thirds of the facilities reported having used the information to advocate for additional resources.
The survey also assessed the display of RHIS-related information that had been updated at least 6 months prior to the survey. Facilities were observed for any type of information on maternal and child health services, including information on facility utilization, population by target groups, and the presence of a map of the facility catchment area. About 60% of the facilities displayed RHIS information, but in only half of these was the displayed information up to date (Fig. 4). Information on maternal and child health was displayed in more than two-thirds of the facilities, while about 60%, 47% and 50% of the facilities, respectively, displayed maps of their catchment areas, disease surveillance information, and facility utilization information.
Barriers to information-use
The perceptions of health information personnel about barriers to information-use for planning purposes were explored using a qualitative approach. The guiding questions used to elicit information during data collection were informed by the Information Cycle Model and the Health Metrics Network Framework. Several themes and categories relating to information-use, and reasons why information is insufficiently used for monitoring purposes, were identified. Table 2 presents a summary of the general themes and categories about data collection and use of information relating to the experiences of staff at different levels of the health information system. Four themes (use of information, barriers to information-use, data-quality issues, and reasons for poor data collection), four sub-themes (human resources, equipment, validation issues, and training), and 39 categories relating to information-use and barriers to using information for monitoring purposes emerged from the data. Some of the themes are similar across the cadres of health professionals and have been categorized accordingly. We present some of the themes observed across the different cadres interviewed in detail, with illustrative quotes.
There were mixed feelings among participants about information-use for managing programs and service delivery across all levels. Despite concerns about data inaccuracy, district health managers (DHMs), a group that includes district and program managers, confirmed using the data, since it gives a sense of what needs to be addressed.
Participants agreed that the information had been selectively used, either for monitoring outcomes, political purposes, reporting to the next level, or for publicity and/or campaign purposes. One provincial manager claimed that data were widely used for measuring program output:
“I report on about five different platforms, most of the PMTCT data ... lots of the information I use for reporting purposes, many of them measuring output and program performance … and then in terms of business plan and budget there’s some of them where, especially feeding options, whether we need to procure less or more milk …” (DHM #2)
However, there were concerns about the lack of information-use by senior management to drive programs, improve service delivery, and inform decision-making for developing action plans. Some ascribed their lack of commitment to health-information tasks to the absence of evidence that the information was used to address problems such as lack of resources and patients’ access to care. Some of the managers saw data collection as a waste of time, as the data were not used. For example, one district manager stated that:
“They use the figures as to say our program is doing very well ... but besides looking at the data what else are they looking [at]…? Do they talk to the data as well?...We have HH and MK hospitals that are in the middle of nowhere. There are no taxis going [in] that direction… What is the government doing about that? Are they providing any transport… for patients?” (DHM #14)
A different dimension to information-use was highlighted by a district manager, who argued that program and facility managers are often caught up in managing problems without first analyzing the source of the problem.
“…Most managers are [so] busy managing problems that they don’t have enough dedicated time to sit down and plan with the data. I think there is not enough attention paid to it and often decisions are being made in a way that does not consider the important little aspects required when collecting data and I think it’s a lack of maybe understanding people when it comes to managing a facility.” (DHM #12)
Barriers to information-use
Some of the key barriers to data-use that emerged included: a lack of trust in the quality of the data; a lack of understanding and skills to interpret and use data; poor packaging of information; a lack of accountability for accurate data; untimely feedback on information; and a poor information-use culture.
Lack of trust in the information collected
Almost all the participants highlighted a lack of trust as a major reason for the non-use of information. The general consensus was that most managers did not bother to use the information because they did not trust the data captured from the registers.
“… There are many managers out there; they will very proudly tell you that they don’t bother with the information because it is just rubbish and they are very proud of it ... I wouldn’t trust the information at all … When you look at the information with a magnifying glass, you see many holes. … I am very confident that we are deluding ourselves.” (DHM #12)
Despite the lack of trust in the data, some district health managers claimed that working with poor data was better than not using it, and that one of the ways of improving data quality was to use the data. As one indicated:
“… I feel that even when the information is not good, the only way to make it good is to use it. I am not waiting for the information to be good to use it, because it is never going to be good if you don’t use it. So using poor information that is unreliable and undependable, is always the starting point; it is only in the process of analyzing it, making sense of it and giving feedback to the people who produced it, so that they understand what the data is used for, that you really get information to be good.” (DHM #12)
Lack of understanding and skills to interpret and use data
The general perception was that personnel, especially FMs, lacked the skills to understand and interpret data for planning, and that information was reported without being interrogated to check whether it was usable. Participants emphasized the importance of strengthening FMs’ understanding of the rationale for collecting data, stressing that once they understood the importance of the data, more attention would be given to ensuring that data signed off and sent out of the facility were of good quality. The following remark illustrates the lack of skills:
“… if I do not understand the data myself, I don’t know what the statistics are saying. There is no way I can use it. So I think it’s probably a lack of skills on the part of the CEOs. They don’t know what these figures represent …” (DHM #6)
The issue of professional nurses’ inability to interpret data, as a result of their lack of numeracy skills, was also highlighted. Participants claimed that this issue had nothing to do with color or race. The problem was even more pronounced because it is mostly professional nurses who function as FMs and are expected to make decisions about patient care and oversee the day-to-day running of the clinics.
“I think that a lot of people are slightly nervous of figures. There’s a kind of numeracy mental block …” (DHM #11)
“People don’t even understand what a percentage is. … I remember giving feedback to nurses, professional nurses. These were not even from Khayelitsha, so it is not like ‘oh the blacks have got bad mathematical skills’ not at all. … I will say 80% and they will say ‘what do you mean 80%? We only saw 19 patients ...’ So they thought I was saying they saw 80 patients.” (DHM #12)
Narratives also suggest that the lack of skills to understand and interpret data stemmed from a poor information-use culture. Participants also attributed the inability to use information to a lack of training of managers, especially FMs, in using data; they therefore simply collected and transmitted data to the next level.
“We give the information to the senior managers. They are able to use the information for planning. People believed that you collect information; you send it to district, to province, national, then national will see what they want to do with that information.” (DHM #6)
Nonetheless, district and program managers argued that FMs needed to acquire the necessary skills to use information, and to ensure that the data produced are validated and of good quality, before such data are signed off. They indicated that FMs should be held accountable for decisions made based on invalid information.
“We need to capacitate managers on the ground on how to use that data. And then to ensure that all the managers … actually sign off the data to ensure it is valid, and to hold managers accountable for decisions taken.” (DHM #10)
Packaging of information
The way information was packaged was of vital importance in determining whether or not it was used. Participants argued that most staff at district and facility levels were not numerically inclined and were not used to figures and numbers; extra effort was therefore required to interpret the information. They suggested that information should be presented by M&E officers and program managers in a simple, easy-to-understand and usable format.
“I think that a lot of people are slightly nervous of figures. There’s a kind of numeracy mental block … you have to find a way to package the information, in [a] digestible way, like sugar coat it almost. You know I think a lot of people aren’t used to looking at data … we will be given all the provincial data and then there’s just these Excel spreadsheets and there are numbers everywhere ... but the M&E manager sort of packaged it in a way that she’d show us these graphs … we had someone who could distil and interpret and give it to us, so if anyone asked practically anyone in the program, nurses, counsellors … they could tell you [interpret the data].” (DHM #11)
Lack of accountability for accurate data
Most of the district managers were concerned that staff at the facility level were nonchalant in the way they performed their daily duties, owing to lack of supervision and accountability. They stated that processes to monitor data accuracy should be put in place at all levels, suggesting that all managers be held accountable for ensuring that data were validated on a monthly basis before being signed off.
“I think managers need to take responsibility and be accountable for their districts and for their sub-districts. And making sure that what they are signing off on a monthly basis is correct …” (DHM #1)
Data-quality issues
Data-quality issues relate to participants’ perceptions of the quality of PMTCT data generated at the facility level. This theme was categorized into four sub-sections: perceptions of poor-quality data; staff at different levels involved in several registers for one program; inaccurate information provided by patients; and cooperation between staff at facilities.
The perception of poor-quality data
All study participants were concerned about the quality of data they received from the facility and agreed there were problems with the data-collection processes.
“A major problem which we actually found was that there were about six discharges in the pediatric ward, and maybe I don’t know whether the lady (nurse) was drowsy or something, she will put that [sic] six under deaths; you see now that’s a HUGE problem.” (FM #5)
Incorrect information provided by patients
Participants expressed their concern about the nature of information received from patients, claiming that patients sometimes gave wrong information to nurses during their clinic visits. The issue here was that if the information fed into the system was inaccurate at the source, it would cause a ripple effect right through the system to the national level, and no amount of validation would solve the problem.
“I don’t know why people lie about where they live ... that is one of the barriers, people losing documents due maybe to a fire or being robbed or misplacing them completely ... we have foreigners, quite a number of Somalis, Ugandans and Kenyans coming in and there is a language barrier at times.” (DHM #5)
“... data collection I would say is quite problematic; what we need to focus on is the source where data is initiated, where data moves from two different points … if the data that you put in there is wrong, then even the system will give you wrong information at the end of the day. It goes a very long way.” (FM#7)
Another issue raised relates to the nature of the recorded data. A district manager expressed concern that the issues surrounding poor-quality data were multifaceted. The manager argued that if nurses had the skills to check data accuracy, they should have been able to detect wrong information, stressing that even when the information from the client was correct, staff sometimes could not make meaningful sense of the data.
“I don’t think it is one thing. I think what happens in the facilities contributes ... they [clients] still sometimes bring wrong data, and then that wrong data is captured in the system. But also, correct data is captured and it says other things, and also if wrong data is brought, someone that is capturing should be able to see that this is abnormal.” (DHM #5)
Staff at different levels involved in several registers for one program
Program coordinators and managers of the PMTCT program, on the other hand, highlighted challenges with how the program was managed. They felt that one of the challenges for data quality was that the PMTCT program was managed by different people at different levels and involved multiple data-collection tools. Data for the PMTCT program were extracted from three registers managed by different staff. At times, the registers were used simultaneously by different nurses to record the care provided to patients. When a nurse was busy with one register, the others improvised and found other means of recording the information, such as on pieces of paper, for later transfer to the registers. Sometimes the nurses forgot to capture this information in the registers, or misplaced the loose sheets, resulting in under-reporting of the services provided.
“There are issues at different levels of data collection and reporting ...different people on the ground are responsible for different registers and, in fact, for some registers, you need more than one provider [nurse]. So, for example, the HCT register is done [captured] by four counsellors and they also need the nurses to provide some of the information. And so I think one of the problems is that they don’t actually work in the registers, they have bits of paper and they put the various bits together in the register later on… but there’s room for error obviously if one person’s piece of paper gets [sic] missing…” (DHM #11)
Reasons for poor-quality data
When asked about their experience with producing data, participants agreed that the quality of the data they produced was suboptimal; however, they gave reasons for this. These were categorized into human resources, equipment and validation issues.
Human resources: Staff are not skilled and equipped to record data
There was a consensus that a lack of manpower and of basic capacity/competence for recording and validating data were the most pressing issues. In this regard, FMs strongly emphasized the following human resources issues as some of the reasons that may impede data quality: staff attitude; human error; staff turnover; lack of numeracy skills; insufficient feedback and supervision; and lack of ownership.
Staff attitude toward performing RHIS tasks was emphasized as a factor hindering the collection of good-quality data. Concerns were raised about the nonchalant attitudes staff displayed when it came to data collection. Participants claimed that some staff willfully refused to tick off relevant data and did not bother to fill in the registers; when they did, they sometimes did not do it properly or comply with the guidelines.
“... some clinicians either forget or willfully don’t tick off the relevant data ... it could be there is staff attitude, like someone saying ‘it’s not my job kind of thing, it’s someone else’s’.” (DHM#9)
“The staff don’t record properly ... They don’t comply with the tally sheet ... And sometimes we don’t get enough time to check thoroughly and then I sign them and the statistics goes [sic] to the next level and you find that there are mistakes which were overlooked ... sometimes other registers are not counted by mistake, which means we don’t count everything that we have done for that month.” (FM #8)
Human error in data collection
Human errors such as forgetfulness, absentmindedness and carelessness, related to the issues of staff attitude discussed above, were identified as reasons for poor data quality at the facility level. FMs claimed nurses often forgot to record the care given and mistakenly recorded wrong values. The views of some of the participants are presented below:
“I don’t think nurses tick every patient that comes in for care; sometimes they will just dish out the tablets for those who come for their doses and then maybe after four or five patients, they will remember to tick and suddenly they just tick twice. Sometimes they may tick less or tick more, so it’s difficult to keep accurate stats.” (FM #1)
“Sometimes they [nurses] forget what to put where, when they do data collection. Then there are also human errors of course, you count 10 people, and then write 11 by mistake …” (FM #5)
Lack of feedback and limited supervision
Supervision and feedback are important issues related to the human error and staff attitude discussed above. Participants argued that, through feedback, staff were able to identify areas of their outputs that needed improvement. Despite its importance, however, feedback was often not provided to staff at the facility level.
“I don’t think there’s nearly enough feedback to people on the ground … it’s taken us many months ourselves to sort of work out how the data flow works … I’m seeing more and more the importance of giving feedback … but I don’t think it happened very much at all.” (DHM #11)
In addition, the frequency and quality of supervisory visits were highlighted as a reason for poor-quality data. One FM expressed concerns about the quality of supervision received at the facility level, arguing that supervision is not always adequate and suggesting that supervisors should be more involved in their supervisory activities, rather than just checking a few folders that may not reflect the true nature of what is happening at the facility.
“During the supervisory visits, sometimes the program manager will pull a few folders to look at what we are doing, but it does not give the true reflection because two folders cannot give you the true picture ... because the folders might all be correct.” (FM #1)
Work overload and staff turnover
Further reasons raised by participants for poor data quality were work overload and staff turnover. Managers at different levels argued that it is extremely difficult to produce accurate data with too few staff. They highlighted the challenge of inadequate capacity and manpower to carry out tasks in an already overburdened health system.
“It is difficult to keep accurate statistics, because the same person who is taking the statistics is the same person who has to give the tablets, it’s the same person who has to find the folder, the same person who must listen to the entering patient, you see. So I don’t think it gives the true reflection.” (FM #1)
Work overload at the facility level relates to issues of staff shortages and staff rotation, which managers believed were major factors influencing the quality of data produced. Managers asserted that the high staff turnover experienced at both the sub-district and facility levels created a demand for training new staff. Staff turnover was also considered to contribute to situations where data are left uncaptured for months, because new staff often lacked the skills and experience to perform HIS-related tasks.
“In certain districts you have quite a high staff turnover which makes it very difficult, and even though we have a lot of people at district level, you can only train every X amount [sic] of weeks or months ... Again staff turnover, even at sub-district level with data capturers, there’s often staff turnover; that is why for a month or two the data will be in shambles like now, and then it’s maybe just a new person who had a totally different idea or didn’t understand.” (DHM#2)
Furthermore, managers argued that because of staff shortages, people not only multi-task, but are asked to perform tasks they are neither qualified nor trained for.
“… majority of the facilities don’t have information clerks and now you have to rely on the facility managers sometimes to provide the reports and they also have a lot that they need to do and sometimes the report comes late. It’s not always on time and there are gaps and so on.” (DHM#13)
Lack of numeracy skills and skills for recording and analysis
Our study revealed that most clinicians were unprepared and lacked the necessary skills for data collection at the facility level. A major issue highlighted was clinicians’ lack of understanding of the definitions of some data elements and indicators. Participants claimed that some clinicians did not have proper training on the definitions of the PMTCT data elements and how the elements should be recorded as reflected in the PMTCT protocol.
“The biggest problem is on the definitions ... some clinicians are not clear with the definitions on the routine monthly report data … a lot of the times the clinicians and nurses are just expected to collect data. They haven’t really been inducted into what are the elements, and what are the definitions.” (DHM#8)
Lack of numeracy skills also emerged as a reason for poor data quality. Despite the importance of numeracy skills in the data-collection and analysis processes, the majority of the staff involved with data collection, such as clinicians, nurses and data capturers, were reported to have inadequate numeracy skills.
“I even asked the clinic nurses to come and learn arithmetic, you know the percentage, because we have to do percentages. I arranged with Damelin College because people couldn’t count … but they did the maths and they said they were bored … I nearly went crazy. A child was weighing 4.5kg and when one is having diarrhea, we use the formula of 20ml per kg. Sisters couldn’t calculate that; they would calculate with 5kg, make it to the nearest! You may be endangering the child’s life by given him an overdose.” (DHM #7)
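The arithmetic at stake in this account is simple but consequential: at 20 ml per kg, a 4.5 kg child should receive 90 ml, whereas rounding the weight up to 5 kg yields 100 ml, roughly an 11% overdose. A minimal sketch of the calculation (the `dose_ml` helper is hypothetical, for illustration only):

```python
# Weight-based fluid dosing, as quoted by the informant (20 ml per kg).
# dose_ml is a hypothetical helper used only to illustrate the calculation.
def dose_ml(weight_kg: float, ml_per_kg: float = 20.0) -> float:
    return weight_kg * ml_per_kg

print(dose_ml(4.5))  # correct dose for a 4.5 kg child: 90.0 ml
print(dose_ml(5.0))  # dose after rounding the weight to 5 kg: 100.0 ml
```

Rounding the input weight rather than computing with the actual weight is precisely the error the informant describes; the resulting 10 ml difference is why she considered it dangerous.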
“And then there’re some of us who are not all that gifted with numeracy skills, but you know, I think I have a pretty good grasp of it. So I think those are the main challenges.” (DHM #10)
Another challenge highlighted, also related to skills, was that data capturers/support clerks are not adequately trained to perform their tasks. Even when training was provided, staff reportedly still did not get it right.
“At the SH clinic the problem was that the person in the baby clinic was not even a professional nurse. So it was a staff member who wasn’t adequately qualified anyway to do the job, and he certainly didn’t understand the register. … In the facilities the staff had never had any training on the use of registers, … and there was actually a new baby follow-up register, and they didn’t fully understand how to use it…” (DHM#11)
“I give them training on how to record several times but you know because of this shortage of staff sometime people are just rushing things and then sometimes they think that recording is just waste of their time.” (FM#8)
One district manager pointed out that a major reason why staff did not benefit from training was the caliber of the people trained in HIS tasks. She argued that most of the trainees are not even qualified to participate in the training.
“… I was so upset when I learnt about people that were actually trained to be data capturers; one was working in the kitchen, the other one was working there as a cleaner, and I was worried that anyone can just be pulled to do data capturing …” (DHM #7)
Lack of skills for validating and analyzing data
Despite the importance of data validation in ensuring good-quality data, the study shows that staff at district and facility levels reportedly lacked the basic skills to validate data. One FM argued that if she did not have the skills to tell whether data were correct or not, no one else in the facility would be able to detect problems with the data.
“I don’t have skills to sit down and say that and be confident that this is what it is ... [there is] nobody with the skill in the facility.” (FM #1)
“You know it’s quite embarrassing actually … [it is] confusing to look at that PMTCT document and validate it, especially when there are sometimes confusing figures ... since I cannot tell whether this is correct or not, my data capturers will never know what is right and what is wrong.” (FM #5)
Training for data collection and capturing/validation
Although the majority of the participants regarded training for recording and validation as necessary for improving data quality, the general agreement was that the structure, content and mode of delivery of such training should be based on adult-learning principles, and should be interactive and experiential in nature. Participants suggested that the preferred option is in-service training that is relevant, participatory and learner-centered. However, district managers expressed frustration about staff behavior during training.
“I’m tired of training. I would prefer on-the-job training. I believe… [it] is more meaningful than a classroom session whereby they are just happy that they are out of their workplaces. Immediate support, even if it could be a central training … at a facility level…, because it is practice that will make you perfect … I do think that it is very important to strengthen people’s understanding of the reason why data is collected.” (DHM #7)
“My experience is that no matter how much training they have people don’t really listen; they only hear what they are expecting to hear …” (DHM #12)
Despite an increasing demand for data, and the investments and efforts made to collect routine facility-based data, the majority of decisions made by health managers in terms of planning and resource allocation for managing HIV programs are not based on data. This study has shown limited use of data at the facility level for decision-making purposes. Even though about 70% of the surveyed facilities had mechanisms and processes in place to promote information-use, only about half had evidence to show that information was adequately used for the day-to-day management of maternal and child health programs. In spite of the availability of reports in almost all the surveyed facilities, only half claimed to have used the information for advocacy purposes, for mobilizing resources, for adjusting personnel, or for reviewing strategies to achieve performance targets.
The qualitative component of this study concurs with the findings from the facility inventory. Managers across the board confirmed that information was selectively used at the facility, district and provincial levels; however, the study revealed that information is not used for decision-making and planning purposes, but for reporting and monitoring of program outputs such as the proportion of HIV-positive pregnant women who received Nevirapine prophylaxis. These findings support other studies on information-use [13, 23, 25]. For instance, Peersman et al. highlighted the extent to which information is used for planning, management and implementation of HIV programs; the authors noted that more than half of the 120 countries surveyed globally self-rated their performance in terms of data-use as below average.
A major reason highlighted in this study for the suboptimal use of information is the inability of staff at the facility and district levels to analyze, interpret and use data. This contradicts the expectations of the South African National Department of Health, outlined in the current District Health Management Information System (DHMIS) standard operating procedures (SOPs). Despite the inclusion of these tasks in the SOPs as key responsibilities of facility and program managers, these activities have not been enforced. According to the SOPs, FMs and professional nurses are not only expected to use data, but also to have the skills to analyze and interpret data. The inability of personnel at both the facility and district levels to use information implies that effective training and support interventions for data-use have not been fully implemented. These results support other studies that highlight the inability of managers to analyze, interpret and use data [32, 36–38]. Burn and Shongwe's telephone survey of 27 hospitals highlighted the insufficient use of information by hospital managers to manage service delivery at the hospital level. The authors attributed these inadequacies to the managers' lack of skills to use data. Since most managers lack the skills to analyze and interpret data, senior management, including M&E managers, should package information in a format that allows easy interpretation, to encourage increased information-use amongst managers.
The in-depth interviews identified several barriers to information-use for planning and management purposes, stemming from human-related and organizational issues such as a lack of trust in the data (a situation where data integrity is questionable) and the lack of a culture of information-use at facility and district levels. The general perspective in this study was that most managers did not use information because they did not trust the data. This lack of trust hinges on inaccurate and incomplete reporting influenced by human, organizational and technical factors. These factors include the lack of core competencies for data collection and analysis [13, 31, 38–40]; staff shortages and work overload; staff attitudes towards RHIS tasks; shortages of basic paper-based resources like registers, tally sheets and stationery [13, 40, 41]; and the lack of an information-use culture [5, 12–14]. Unlike Burn and Shongwe, Ledikwe et al. and Chaulagai et al. considered additional barriers to information-use, such as the lack of data ownership, which also emerged from this study [14, 42]. These findings, which support the proposals by Nutley and Reynolds to strengthen managers' capacity to use information, are consistent with studies by Solarsh and Goga and by Mate et al., who argue that the lack of reliable data impedes the tracking of progress on health-service delivery.
Regardless of the data-trust issues, participants suggested that poor data should nevertheless be used. They argued that working with poor data is one way of improving data quality, since it is only by analyzing and making sense of the data that one is able to provide feedback to data producers on how to improve it. This proposal conforms to the data-use intervention suggested by Braa et al., which involves quarterly data-use workshops for health management staff, where routine data are presented and critiqued. This approach has been used successfully in Tanzania and Rwanda to improve data quality, data transmission, data integrity and data-use. Nutley et al. used a logic-model intervention to improve the use of data in decision-making in Ivory Coast. A data-collection and feedback training intervention has also been used to improve the quality of routinely collected data in South Africa.
Another significant finding of this study is the lack of a culture of information-use for decision-making. This study found an insufficient information-use culture at the surveyed facilities and inadequate evidence of information-use by senior management for planning and decision-making. If there is no demand for information from senior management, and staff do not understand the usefulness of information, chances are that FMs will not use information. This result supports findings from other studies [1, 4, 12, 46] that report on the culture of information-use. Kamadjeu et al. highlighted leadership issues as one of the challenges faced in the adoption and sustainability of an electronic health care record in Cameroon. They observed a 50% dropout rate among users of the system, attributed to a lack of support for the new technology by the new management staff, which did not promote a culture of use. Lorenzi et al. have also argued that the success or failure of an organization, irrespective of the management techniques, depends on its leadership. Taylor proposed three main factors for the effective adoption of culture change: behaviors (of those we admire or perceive as role models); symbols (such as who gets promoted); and systems (such as rewards and punishments). Taylor argued that it is fruitless to try to change individuals' behavior without first changing the culture of the environment in which they live. In other words, if top leadership promotes a culture of information, personnel are likely to emulate them [48, 49].
This study shows that information is inadequately used by health managers to inform decisions and for planning purposes, but is selectively used for reporting and monitoring program outputs at the provincial level. The inadequate use of information has been attributed to several behavioral and organizational barriers, including a lack of trust in the data; the inability of many managers to analyze, interpret and use information, owing to insufficient skills; and the lack of, or an inadequate, culture of information-use at the district and facility levels.
The inability of facility and program managers to use information implies that decisions for program implementation and improving service delivery are not always based on data. Limited information-use could have a negative impact on service delivery. Hence, facility and program managers should be provided with practice-based, in-service training that is participatory and learner-centered, and should be supported to use information for planning, management and decision-making. Furthermore, a culture of information-use at all levels needs to be promoted by senior management using proven data-use interventions. Capacity in information-use competencies such as data analysis, interpretation, synthesis and presentation should be strengthened, as should efforts to improve data quality. Further investigation is needed to determine how decisions for planning and evaluating key programs such as PMTCT are made, and what informs such decisions if not the data.
CEO: Chief Executive Officer
DHIS: District Health Information System
DHM: District Health Manager
DHMIS: District Health Management Information System
DIO: District information officer
FIO: Facility information officer
HAST: HIV/AIDS, STI & TB coordinators
HCT: HIV counselling and testing
HIO: Health information officer
HIV: Human Immunodeficiency Virus
LMICs: Low- and middle-income countries
M&E: Monitoring and evaluation
PHC: Primary Health Care
PIO: Provincial information officer
PMTCT: Prevention of Mother-To-Child Transmission of HIV
RHIS: Routine Health Information System
SOPs: Standard operating procedures
SURMEPI: Stellenbosch University Rural Medical Education Partnership Initiative
UNGA: United Nations’ General Assembly
World Health Organization. Health metrics network: framework and standards for country health information systems. Geneva: World Health Organization; 2008.
Aqil A, Lippeveld T. PRISM tools for assessing, monitoring and evaluating RHIS performance. MEASURE evaluation project; 2010. Available from: http://www.cpc.unc.edu/measure/tools/monitoring-evaluation-systems/prism. Accessed 04 Apr 2016.
Aqil A, Lippeveld T, Hozumi D. PRISM framework: a paradigm shift for designing, strengthening and evaluating routine health information systems. Health Policy Plan. 2009;24(3):217–28. https://doi.org/10.1093/heapol/czp010.
Nutley T, Reynolds HW. Improving the use of health data for health system strengthening. Glob Health Action. 2013;6:20001. https://doi.org/10.3402/gha.v6i0.20001.
Aqil A. PRISM case studies. Strengthening and evaluating RHIS. Chapel Hill: Carolina Population Center, University of North Carolina. MEASURE Evaluation; 2008.
Mate KS, Bennett B, Mphatswe W, Barker P, Rollins N. Challenges for routine health system data management in a large public programme to prevent mother-to-child HIV transmission in South Africa. PLoS One. 2009;4(5):e5483. https://doi.org/10.1371/journal.pone.0005483.
Nicol E, Dudley L, Bradshaw D. Assessing the quality of routine data for the prevention of mother-to-child transmission of HIV: an analytical observational study in two health districts with high HIV prevalence in South Africa. Int J Med Inform. 2016;95:60–70. https://doi.org/10.1016/j.ijmedinf.2016.09.006.
Nicol E, Bradshaw D. Maternal, newborn and child survival: data challenges. In: Fonn S, Padarath A, editors. South African health review. Durban: Health Systems Trust; 2010. p. 73–8.
Ferguson L, Grant AD, Ong'ech JO, Vusha S, Watson-Jones D, Ross DA. Prevention of mother-to-child transmission of HIV: assessing the accuracy of routinely collected data on maternal antiretroviral prophylaxis coverage in Kenya. Sex Transm Infect. 2012;88(2):120–4. https://doi.org/10.1136/sextrans-2011-050220.
Gimbel S, Micek M, Lambdin B, Lara J, Karagianis M, Cuembelo F, Gloyd S, Pfeiffer J, Sherr K. An assessment of routine primary care health information system data quality in Sofala Province, Mozambique. Popul Health Metr. 2011;9(1):12. https://doi.org/10.1186/1478-7954-9-12.
Glèlè Ahanhanzo Y, Ouendo E-M, Kpozèhouen A, Levêque A, Makoutodé M, Dramaix-Wilmet M. Data quality assessment in the routine health information system: an application of the lot quality assurance sampling in Benin. Health Policy Plan. 2015;30(7):837–43. https://doi.org/10.1093/heapol/czu067.
Kumalo F. Health management information systems. In: Ijumba P, Padarath A, editors. South African health review 2006. Durban: Health Systems Trust; 2006. p. 65–76.
Rohde JE, Shaw V, Hedberg C, Stoops N, Venter S, Venter K, Matshisi L. Information for primary health care. In: Barron P, Roma-Reardon J, editors. South African health review 2008. Durban: Health Systems Trust; 2008. p. 195–210.
Ledikwe JH, Grignon J, Lebelonyane R, Ludick S, Matshediso E, Sento BW, et al. Improving the quality of health information: a qualitative assessment of data management and reporting systems in Botswana. Health Res Policy Syst. 2014;12:7. https://doi.org/10.1186/1478-4505-12-7.
MEASURE Evaluation. Tools for Data Demand and Use in the Health Sector Performance of Routine Information Systems Management (PRISM) Tools. Chapel Hill: MEASURE Evaluation; 2011.
World Health Organization. Health metrics network: assessing the national health information system: an assessment tool. Version 4.00. Geneva: WHO; 2008. Available from: https://books.google.co.za/books?id=rNNdgDGxj1UC&printsec=frontcover&source=gbs_ViewAPI&redir_esc=y#v=onepage&q&f=false. Accessed 18 Mar 2016.
Patton MQ. Utilization-focused evaluation. 4th ed. Los Angeles: Sage Publications; 2008.
Nutley T, Gnassou L, Traore M, Bosso AE, Mullen S. Moving data off the shelf and into action: an intervention to improve data-informed decision making in cote d'Ivoire. Glob Health Action. 2014;7:25035. https://doi.org/10.3402/gha.v7.25035.
Horch K, Wirz J. People's interest in health information. Bundesgesundheitsblatt, Gesundheitsforschung, Gesundheitsschutz. 2005;48(11):1250–5. https://doi.org/10.1007/s00103-005-1153-z.
Warburton A, Bate L, Long A. The demand for and use of outcomes information. Health Libr Rev. 1994;11(4):253–61.
Barron P, Pillay Y, Doherty T, Sherman G, Jackson D, Bhardwaj S, Robinson P, Goga A. Eliminating mother-to-child HIV transmission in South Africa. Bull World Health Organ. 2013;91(1):70–4. https://doi.org/10.2471/BLT.12.106807.
Garrib A, Stoops N, McKenzie A, Dlamini L, Govender T, Rohde J, Herbst K. An evaluation of the district health information system in rural South Africa. S Afr Med J. 2008;98:549–52.
MEASURE Evaluation. A review of constraints to using data for decision making: recommendations to inform the design of interventions. Chapel Hill: Carolina Population Center, University of North Carolina; MEASURE Evaluation; 2010.
Harries AD, Zachariah R, Maher D. The power of data: using routinely collected data to improve public health programmes and patient outcomes in low- and middle-income countries. Tropical Med Int Health. 2013;18(9):1154–6. https://doi.org/10.1111/tmi.12159.
Peersman G, Rugg D, Erkkola T, Kiwango E, Yang J. Are the investments in national HIV monitoring and evaluation systems paying off? J Acquir Immune Defic Syndr. 2009;52 Suppl 2:S87–96. https://doi.org/10.1097/QAI.0b013e3181baede7.
National Department of Health. DHIS standard operating procedures and guidelines. Pretoria: Department of Health; 2011.
National Department of Health. Joint review of HIV, TB and PMTCT programmes in South Africa: main report. Pretoria: NDOH; 2014.
Alistar SS, Brandeau ML. Decision making for HIV prevention and treatment scale up: bridging the gap between theory and practice. Med Decis Mak. 2012;32(1):105–17. https://doi.org/10.1177/0272989X10391808.
Bateman C. Inept drug supply management causing stock-outs. S Afr Med J. 2015;105(9):706–7.
Solarsh G, Goga A. Child health. In: Ijumba P, Day C, Ntuli A, editors. South African health review 2003/04. Durban: Health Systems Trust; 2004. p. 101–26.
Kimaro HC, Twaakyondo HM. Analysing the hindrance to the use of information and technology for improving efficiency of health care delivery system in Tanzania. Tanzan Health Res Bull. 2005;7(3):189–97.
Burn A, Shongwe G. Monitoring hospital care in South Africa: development and use of indicators in hospital service management. In: Ijumba P, Day C, Ntuli A, editors. South African health review 2003/2004. Durban: Health Systems Trust; 2004. p. 29–41.
Nicol E. Evaluating the process and output indicators for maternal, newborn, and child survival in South Africa: A comparative study of PMTCT information systems in KwaZulu-Natal and the Western Cape. PhD Thesis. SUNScholar: Stellenbosch University; 2015. http://hdl.handle.net/10019.1/97073.
Thomas DR. A general inductive approach for analyzing qualitative evaluation data. Am J Eval. 2006;27(2):237–46. https://doi.org/10.1177/1098214005283748.
Heywood A, Rohde J. Using information for action. A manual for health workers at facility level. Arcadia, Pretoria: The Equity Project; 2001.
MEASURE Evaluation. Decision maker perceptions in Kenya and Nigeria: an assessment of data use constraints. Chapel Hill: Carolina Population Center, University of North Carolina; MEASURE Evaluation; 2010.
Harrison T. A facility-level assessment of data use constraints in Uganda. MEASURE Evaluation: Chapel Hill; 2010.
Nicol E, Bradshaw D, Phillips T, Dudley L. Human factors affecting the quality of routinely collected data in South Africa. Stud Health Technol Inform. 2013;192:788–92. https://doi.org/10.3233/978-1-61499-289-9-788.
Doherty T, Chopra M, Nsibande D, Mngoma D. Improving the coverage of the PMTCT programme through a participatory quality improvement intervention in South Africa. BMC Public Health. 2009;9:406. https://doi.org/10.1186/1471-2458-9-406.
English R, Masilela T, Barron P, Schönfeldt A. Health information Systems in South Africa. In: Padarath A, English R, editors. South African health review 2011. Durban: Health Systems Trust; 2011. p. 81–90.
AbouZahr C, Boerma T. Health information systems: the foundations of public health. Bull World Health Organ. 2005;83(8):578–83.
Chaulagai CN, Moyo CM, Koot J, Moyo HB, Sambakunsi TC, Khunga FM, Naphini PD. Design and implementation of a health management information system in Malawi: issues, innovations and results. Health Policy Plan. 2005;20(6):375–84. https://doi.org/10.1093/heapol/czi044.
Braa J, Heywood A, Sahay S. Improving quality and use of data through data-use workshops: Zanzibar, United Republic of Tanzania. Bull World Health Organ. 2012;90(5):379–84. https://doi.org/10.2471/BLT.11.099580.
MEASURE Evaluation. Improving Demand for and Use of Data Strengthens HIV/AIDS Programs in Rwanda. Chapel Hill: University of North Carolina; 2012.
Mphatswe W, Mate KS, Bennett B, Ngidi H, Reddy J, Barker PM, Rollins N. Improving public health information: a data quality intervention in KwaZulu-Natal, South Africa. Bull World Health Organ. 2012;90(3):176–82. https://doi.org/10.2471/BLT.11.092759.
Kamadjeu RM, Tapang EM, Moluh RN. Designing and implementing an electronic health record system in primary care practice in sub-Saharan Africa: a case study from Cameroon. Inform Prim Care. 2005;13(3):179–86.
Lorenzi NM, Riley RT, Blyth AJ, Southon G, Dixon BJ. Antecedents of the people and organizational aspects of medical informatics: review of the literature. J Am Med Inform Assoc. 1997;4(2):79–93.
Taylor C. Building cultures for sustainability. Oxford Lead J. 2009;1:1–3. Available from: http://www.oxfordleadership.com/journal/vol1_issue1/taylor.pdf. Accessed 04 Apr 2016
Yusof MM, Kuljis J, Papazafeiropoulou A, Stergioulas LK. An evaluation framework for health information systems: human, organization and technology-fit factors (HOT-fit). Int J Med Inform. 2008;77(6):386–98. https://doi.org/10.1016/j.ijmedinf.2007.08.011.
We thank the Western Cape and KwaZulu-Natal Departments of Health, as well as the City of Cape Town for permission to conduct the study. We are grateful to the personnel of the SAMRC Burden of Disease Research Unit, especially Ria Laubscher, William Msemburi and Dr. Lyn Hanmer for their valuable inputs. We also appreciate the co-operation of the staff in the selected facilities, and would like to thank Tammy Phillips, Hlobsile Madida, Zamantsele Cebekhulu and Linda Mbuthini for data collection. We also thank Michelle Galloway, who assisted with language editing of the manuscript.
This study was funded by the Stellenbosch University Rural Medical Education Partnership Initiative (SURMEPI) and an African Doctoral Dissertation Research Fellowship (ADDRF) award offered by the Africa Population and Health Research Center (APHRC) in partnership with the International Development Research Centre (IDRC). The funders had no influence on the study design, data collection, analysis, interpretation of data, and writing of this manuscript. Publication was funded by the International Development Research Center (Grant Number 107508–001) and the John D. and Catherine T. MacArthur Foundation (Grant Number 14–107495-000-INP).
Availability of data and materials
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
About this supplement
This article has been published as part of BMC Health Services Research Volume 17 Supplement 2, 2017: Research for health systems strengthening in Africa: studies by fellows of the African Doctoral Dissertation Research Fellowship (ADDRF) program. The full contents of the supplement are available online at https://bmchealthservres.biomedcentral.com/articles/supplements/volume-17-supplement-2.
Ethics approval and consent to participate
Ethics approval for the study was obtained from the University of Stellenbosch Health Research Ethics Committee (Ethics reference number: N10/10/322). Written permission to access data and health facilities was also granted by the relevant health authorities (Western Cape Provincial Department of Health reference number: RP03/2011; Western Cape City Health ID No: 10,211; and KwaZulu-Natal Department of Health reference number: HRKM174/10). Written consent was obtained from all key informants and participants.
Competing interests
The authors declare no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.