Internews Center for Innovation & Learning


The learning curve

[Guest blog post written by Tara Susman-Peña, Senior Research Officer for Internews]

Graphing the number of mobile surveys and paper surveys completed during each day of the pilot shows a clear improvement in the enumerators’ data collection capabilities. 

[Figure: number of mobile surveys and paper surveys collected on each day (Day 1 through Day 4) of the Dadaab pilot.]
The target sample size was 525 surveys – 500 plus an additional 25 in case we had to throw away any that were completed incorrectly. The sample was collected in each of the 5 camps: Dagahaley, Hagadera, Ifo, Ifo2, and Kambioos. The number of people surveyed in each camp was proportionate to the size of each camp. The enumerators became so efficient that on the final day, they were able to end early and did not need to collect as many surveys as in the previous days.
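The proportional allocation described above can be sketched in a few lines. The camp populations below are made-up placeholders for illustration, not real figures; only the camp names and the 525-survey target come from the post. This sketch uses the largest-remainder method so the per-camp quotas sum exactly to the target.

```python
def allocate_proportionally(populations, total_samples):
    """Allocate a fixed number of surveys proportionally to population,
    using the largest-remainder method so the quotas sum exactly."""
    total_pop = sum(populations.values())
    raw = {camp: total_samples * pop / total_pop
           for camp, pop in populations.items()}
    quotas = {camp: int(r) for camp, r in raw.items()}
    leftover = total_samples - sum(quotas.values())
    # Give the remaining surveys to the camps with the largest fractional parts.
    for camp in sorted(raw, key=lambda c: raw[c] - quotas[c], reverse=True)[:leftover]:
        quotas[camp] += 1
    return quotas

# Hypothetical populations, for illustration only.
camp_populations = {
    "Dagahaley": 100_000,
    "Hagadera": 130_000,
    "Ifo": 95_000,
    "Ifo2": 60_000,
    "Kambioos": 15_000,
}
quotas = allocate_proportionally(camp_populations, 525)
```

Whatever the true camp sizes, the same routine yields per-camp targets that the supervisors could hand to their teams each morning.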

The improvement in administering the survey shows in the rising daily totals, and in particular in the jump from Day 1 to Day 2 in the number of surveys collected on the mobile phones, with a corresponding drop in the use of paper surveys. Even with this shift toward mobile, Captricity's ability to digitize paper forms means there is no need to ever go completely mobile. Beyond keeping paper surveys on hand in case of technical failure, we also found that some respondents were more comfortable with an interview recorded on paper. Some feared that the phone might be used to audio- or video-record them; others wanted to see a mark being made with a pencil so they knew for sure that an answer was being recorded. So paper surveys remain an important backup.

What isn’t visible in the graph is the qualitative improvement that we observed each day as well. There was a clear improvement in the enumerators’ facility and comfort with the phones, and also in their ability to correctly and efficiently do the paper-based surveys.  Interestingly, the number of paper surveys increased between Day 2 and Day 3. We sat with the supervisors and looked over the paper surveys at the end of each day, and sent back any surveys we found with errors that rendered the data unusable. Each morning, the supervisors met with their teams, gave specific feedback on paper surveys, and confirmed the sampling plan for the day.  

The final test of how the enumerators did (and how the pilot did overall) will come out of the data analysis. Did this exercise produce useful, actionable data? Can the data be digested quickly? The FormHub team is still working on the analysis software, Bamboo. But the hope is that data collection and analysis will be one very streamlined process in the next phase.

Next steps for the Humanitarian Data Toolkit pilot:

  • Finalize and test Bamboo, the analysis/data visualization tool
  • Finish the integration of the data collected on paper and the data collected by phones (we’re getting there!)
  • Finalize the guidebooks for the toolkit and the research
  • Write the evaluation of the pilot
  • Plan the pilot’s next phase…
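The integration step in the list above amounts to combining two streams of records: surveys captured directly on the phones and paper surveys digitized afterwards. A minimal sketch, assuming illustrative field names and a "source" tag that are not from the actual export schemas:

```python
def merge_survey_records(mobile_records, paper_records):
    """Combine mobile and digitized-paper records into one dataset,
    tagging each row with its collection mode for later comparison."""
    combined = []
    for rec in mobile_records:
        combined.append({**rec, "source": "mobile"})  # copy, don't mutate input
    for rec in paper_records:
        combined.append({**rec, "source": "paper"})
    return combined

# Hypothetical records for illustration only.
mobile = [{"respondent_id": "M-001", "camp": "Ifo", "q1": "radio"}]
paper = [{"respondent_id": "P-001", "camp": "Hagadera", "q1": "word of mouth"}]
dataset = merge_survey_records(mobile, paper)
```

Keeping the collection mode on each row also makes it easy to check later whether paper and mobile responses differ systematically.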

