OpenIDEO is an open innovation platform for solving big challenges for social good.

Hi Kathleen,

Thank you for writing such a thoughtful response. Here are a couple of resources that I think may help:

Regarding mobile data collection, limited mobile phone coverage is no longer a barrier. There are numerous platforms available, and many offer offline data collection: your organization, a community member, or any other enumerator can collect data offline on a device, then sync the results once they have connectivity or coverage. Here is an excellent overview comparing and contrasting 15 popular options: http://lwrdmel.weebly.com/uploads/1/4/3/7/14377648/overview_of_mobile_data_collection_platforms.pdf. Nuru International uses QuickTapSurvey, which I would also recommend checking out.
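To make the offline-first pattern concrete, here is a minimal sketch in Python. This is not any platform's actual API; the class and field names are purely illustrative of the queue-locally, sync-later behavior these tools share:

```python
import time

class OfflineCollector:
    """Sketch of an offline-first collector: records are queued locally
    on the device and flushed to a server only when connectivity returns."""

    def __init__(self):
        self.pending = []   # records awaiting upload (local storage)
        self.synced = []    # stands in for the remote server

    def record(self, survey_id, answers):
        # Always write locally first; no network connection needed.
        self.pending.append({
            "survey": survey_id,
            "answers": answers,
            "collected_at": time.time(),
        })

    def sync(self, online):
        # Flush the local queue only when a connection is available.
        if not online:
            return 0
        uploaded = len(self.pending)
        self.synced.extend(self.pending)
        self.pending.clear()
        return uploaded

collector = OfflineCollector()
collector.record("household-1", {"crop": "maize", "acres": 2})
collector.record("household-2", {"crop": "beans", "acres": 1})
print(collector.sync(online=False))  # 0: still offline, nothing uploaded
print(collector.sync(online=True))   # 2: both queued records flushed
```

The key design point is that data entry never depends on the network; syncing is a separate, retryable step.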

Regarding feedback, Harvard Business Review has a couple of topic pages that aggregate articles and forthcoming research: https://hbr.org/topic/giving-feedback and https://hbr.org/topic/receiving-feedback. Unfortunately, I do not have a freely accessible training on this to point you to.

Please feel free to reach out to my email if I can be of any further assistance: matt.lineal@nuruinternational.org

Hi Ellen,

It was a pleasure to read through your concept. As a reviewer, I wanted to provide feedback on your core outstanding questions. My premise in making these comments is that the right answers will come from local partners, VSLA leadership, and other stakeholders. That said, I hope my experience and outlook from a decade of integrated international development work in Honduras, Kenya, Ethiopia, and Nigeria can contribute in some small way.

1. What type of proxy indicators can we use to measure user income?

Income is notoriously difficult to measure with smallholder farmers in rural areas. Income sources can be highly varied and hard to quantify, reporting barriers are high, and records are scarce. Instead of pointing to specific indicators, as these may be context and livelihood specific, I'll mention a few general categories.

Livelihood zoning and rural livelihood economics can provide a helpful snapshot for a given population segment. This will incorporate an analysis of sources of income, amount, seasonal calendars, risks, and coping strategies. Understanding livelihoods can provide invaluable context in program design and implementation, particularly in scaling up an intervention.

Gross margin analysis is an incredibly helpful tool for complying with "do no harm" principles and evaluating outcomes, particularly for loan-based or livelihoods interventions. Relying on financial models, local data, and expert analysis, you can produce a reasonably accurate estimate of the impact of a particular income-generating activity. That said, this tool is often best suited to evaluating a single activity, not all the income a rural household may earn. It can also capture changes over very short periods of time.
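At its core, the calculation is simple: revenue from the activity minus the variable costs attributable to it. A tiny sketch, using entirely made-up figures for a hypothetical one-acre maize plot:

```python
def gross_margin(revenue, variable_costs):
    """Gross margin of one income-generating activity:
    revenue minus the variable costs attributable to that activity."""
    return revenue - sum(variable_costs)

# Hypothetical numbers, in local currency:
revenue = 150 * 30          # 150 kg of maize sold at 30 per kg = 4500
costs = [600, 450, 900]     # seed, fertilizer, hired labor
margin = gross_margin(revenue, costs)
print(margin)               # 2550
```

The analytical work, of course, is in gathering reliable local prices and cost data, not in the arithmetic; and because fixed household costs are excluded, the result describes the activity, not the household's total income.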

Last, but certainly not least, are asset-based indicators for assessing wealth status. There are numerous systems here, each with its relative merits and drawbacks. The categorical nature of asset-based indicators generally makes it harder to pick up smaller-scale change, so longer time frames may be necessary to measure change (years, not days or months).
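To illustrate why these indicators are coarse, here is a toy sketch of an asset index. The asset list, weights, and band cutoffs are all invented for illustration; in practice, weights are usually derived statistically (for example via principal component analysis) for each context:

```python
# Hypothetical asset weights -- real weights are context-specific
# and derived from survey data, not assigned by hand.
ASSET_WEIGHTS = {
    "iron_roof": 2.0,
    "bicycle": 1.0,
    "radio": 0.5,
    "livestock": 1.5,
    "mobile_phone": 1.0,
}

def asset_score(owned):
    """Weighted sum over the assets a household owns."""
    return sum(ASSET_WEIGHTS[a] for a in owned if a in ASSET_WEIGHTS)

def wealth_band(score):
    # Coarse categorical bands -- the reason small changes are hard to see:
    # a household's score can move a lot without crossing a cutoff.
    if score >= 4.0:
        return "better-off"
    if score >= 2.0:
        return "middle"
    return "poor"

print(wealth_band(asset_score(["radio"])))  # poor
print(wealth_band(asset_score(["iron_roof", "bicycle", "mobile_phone"])))  # better-off
```

A household that gains a radio stays in the same band, which is exactly the insensitivity to small, short-term change described above.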

2. At times Plant With Purpose collects baseline data in areas where we don’t currently work. How can we mitigate unintended expectations from baseline participants in these areas?

Recognizing the time poverty that smallholder farmers face is appreciated and the right place to root this concern. If the organization has the will, intent, and means to scale up to those areas eventually, it may be sufficient to express the intent to work in that area in the future. In my professional experience, while communities may not place much value on a promise that will only come to fruition a year or two away, they were invariably welcoming when the intervention did start up there. If the project has no intent of working in those areas, then carefully consider what is most appropriate for the context and honors the participants' time. In some circumstances, that may mean thanking them for their time and being incredibly frugal with the time requested when scouting potential new areas that might not be good fits. On the other hand, some countries and contexts have judged it appropriate to pay a token sum of money for the participants' time. This should be done with great caution, however, as it can promote dependency and work against sustainability.

3. We find that in rural areas there tend to be fewer sources of data and we primarily rely on self-reported information. Often this information is based on participants’ feelings or what they choose to share. Is this type of data sufficient? Is there another way to collect objective data and/or verify the participant data that we collect?

Self-reported data provide an incredibly rich source of information. To inform certain decisions, in fact, self-reported information from participants is the only right source. There are also ways to strengthen and complement this information.

One way is to standardize the questions asked and repeat them over time, then track change. Responses can also be GPS-tagged so they are georeferenced. Tracking trends over time and space adds a whole new dimension to the data.
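As a sketch of this approach, responses to one standardized question (here a hypothetical "meals per day" indicator, with invented coordinates) can be grouped by survey round to surface a trend over time:

```python
from collections import defaultdict

# Each response to a standardized question carries the survey round
# plus GPS coordinates, so trends can be tracked over both time and
# space. Field names and values here are illustrative only.
responses = [
    {"round": 2022, "lat": -1.29, "lon": 36.82, "meals_per_day": 2},
    {"round": 2022, "lat": -1.31, "lon": 36.80, "meals_per_day": 2},
    {"round": 2023, "lat": -1.29, "lon": 36.82, "meals_per_day": 3},
    {"round": 2023, "lat": -1.31, "lon": 36.80, "meals_per_day": 2},
]

# Group the repeated question by round and report the mean per round.
by_round = defaultdict(list)
for r in responses:
    by_round[r["round"]].append(r["meals_per_day"])

for year in sorted(by_round):
    values = by_round[year]
    print(year, sum(values) / len(values))
# 2022 2.0
# 2023 2.5
```

Because the coordinates are stored with every response, the same grouping could be done spatially (by village or grid cell) instead of, or in addition to, by round.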

Imposing a quasi-experimental design with a non-intervention comparison group is another way to bolster the utility of results. With the proper design, self-reported data can provide a basis for demonstrating attributable impact of the intervention.
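One common way to use such a comparison group is a difference-in-differences estimate: the change in the intervention group minus the change in the comparison group, which nets out background trends affecting both. A minimal sketch with invented numbers:

```python
def diff_in_diff(treat_before, treat_after, ctrl_before, ctrl_after):
    """Difference-in-differences: change in the intervention group
    minus change in the comparison group, netting out shared trends."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical mean self-reported scores (e.g. a food-security index):
effect = diff_in_diff(
    3.1, 4.0,   # intervention group: baseline 3.1, endline 4.0 (+0.9)
    3.0, 3.3,   # comparison group:   baseline 3.0, endline 3.3 (+0.3)
)
print(round(effect, 2))  # 0.6
```

The 0.6 is the change attributable to the intervention under the usual parallel-trends assumption; a real analysis would also need standard errors and attention to how the comparison group was selected.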

Yet another dimension of self-reported data is to conduct deep dives. Participatory community-based evaluation techniques, qualitative methods, and focus group discussions are just a few of the general ways to drill down into the data.

I hope this feedback provides helpful food for thought. Thank you for sharing your work and for reviewing my comments. Sincerely, Matt.

Hi Kathleen,

The Chiseka Beekeeping Project provided a thoughtful and robust proposal. As a reviewer, I want to provide feedback on your questions of concern. Out of deep respect for local communities and implementing partners, I want to recognize that the right answers will come from the people in the communities working with the project. I can offer some examples and insights that have worked in other places, although these will of course need to be contextualized and gauged for appropriateness.

1. Given the historic presence of nonprofits in Malawi and the lingering power dynamic related to this history, how might we ensure we are getting honest feedback during user interviews (both in assessing needs before program launch and successes/challenges after the program launches)? In particular, what strategies can a small nonprofit employ given limited team capacity?

First, let me start by saying that placing a high value on gathering honest feedback means you are already more than halfway there. Gathering honest feedback may involve shaping culture as well as employing the right mechanisms and tools. Providing training on feedback and its value can begin to shape the culture around feedback in the community and among respondents. In contexts where feedback might be seen as disrespectful or inappropriate, a discussion of the value of feedback may open the door.

Once you have established an understanding of what feedback is and why it matters, it becomes a question of using the right tools and strategies. Mobile data collection, including a net promoter score question, could be one way to start putting a picture together. From there, individual interviews, focus group discussions, participant observation, or other techniques may work best to gather detailed feedback.
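For reference, the net promoter score is computed from a single 0-10 "would you recommend this?" question: the percentage of promoters (9-10) minus the percentage of detractors (0-6). A small sketch with invented ratings:

```python
def net_promoter_score(ratings):
    """NPS on a 0-10 scale: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but neither bucket."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses: 5 promoters, 3 passives, 2 detractors.
ratings = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]
print(net_promoter_score(ratings))  # 30.0
```

The score ranges from -100 (all detractors) to +100 (all promoters), which makes it a blunt but easy-to-track headline number; the detailed follow-up methods mentioned above are what explain why the number moves.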

Lastly, once you have feedback on what is and is not working, it is critical to act on that information. Then go back to the respondents and involve them in the response as appropriate. This may mean taking action together, or at least informing them of the action taken based on their feedback. After corrective action is taken, gather more feedback.

Shaping a culture of feedback reinforced by action will begin to slowly add to a reinforcing cycle.

2. In addition to early and frequent community sensitization meetings, how might we best mitigate expectations for support while conducting baseline surveys, as not all those interviewed will be chosen for the program (but all answers are important for baseline data)? More specifically, how might we best mitigate resentment towards the project and/or jealousy towards those participating?

Being sensitive to the time poverty people face is very thoughtful and appreciated. I have seen two strategies work here, although both were highly context-dependent. On the one hand, you can compensate non-participant respondents for the time they spend on surveys, whether for a baseline or later as a non-intervention comparison group. This strategy should be used with great caution: in some contexts it may further dependency and promote a poor image of projects, whereas in others it may be entirely appropriate and expected. Moreover, such incentives can and do influence the way respondents answer questions. Another strategy is to invite non-participant respondents to join the project after the particular evaluation or research is complete; for instance, have them join a second phase after the three-year project ends. Again, this is highly context-dependent, as participants may not value a commitment so far in the future, and the project may not anticipate ever scaling up.

3. Peer education is a critical component to maintaining and scaling our program. Outside of monetary incentives, what strategies or approaches might we employ to encourage and empower peer mentors to engage with their beneficiary groups?

One practical way to sustain community-based volunteerism is to offer an unofficial, unpaid role with the project. This may include outfitting volunteers with a shirt and/or a badge that identifies them as mentors with the program. I have seen this be an effective motivator, whether because volunteers enjoy the benefits of perceived increased social status or because it practically qualifies their efforts as work experience. With this approach, managing expectations with participants and staying compliant with labor laws is critical.

Thank you for considering my feedback. It was a joy to read about your project! Matt