OpenIDEO is an open innovation platform. Join our global community to solve big challenges for social good.

Sidekick Learning

Our AI collects data on how students solve problems during company-sponsored projects, then uses it to connect learners to ideal careers.

Written by Chris
When co-founder and CEO Chris was 18, he made a promise to change the perception of what a student could do. Sidekick makes good on this promise by allowing young adults to work side-by-side with industry experts to complete real-world projects while ensuring they achieve greater academic outcomes than they would through traditional instruction.

Problem we're solving

Two decades of research show that authentic-audience, project-based learning is potentially the most comprehensively effective learning method, with 3% to 300% improvements in academic and high-stakes testing outcomes. Studies have shown that project-based learning can also be an effective lever for closing the achievement gap across sex, race, and socioeconomic status. These benefits aren't a secret: 80% of professors report wanting to bring learning through relevant, real-world projects to their classrooms. Yet only 1% do.

This is because it takes 3 to 5 times the effort to first find a real-world partner with a suitable problem to solve, align the project to desired learning outcomes, and then adapt the supporting instruction and materials to the inevitable twists and turns that come with a real-world project.


Instead of manufacturing projects that align with academic outcomes, Sidekick starts with a partner's real-world, meaningful project and uses a set of proprietary machine-learning algorithms to predict the most likely natural learning outcomes from the project, based on a postsecondary ontology that can be further customized to the desired institution or discipline. We then add a curricular layer of instructor notes, self-directed student learning resources, and authentic assessments that help instructors monitor progress and ensure learning outcomes are met. Our system not only makes project-based curriculum development faster and easier; as students move and pivot through the project, the curriculum moves and pivots with them, so instructors are never caught unprepared.
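As a purely illustrative sketch (Sidekick's actual algorithms are proprietary; the ontology entries, keyword signatures, and scoring here are invented for the example), outcome prediction might resemble ranking ontology outcomes by textual similarity to the project description:

```python
from collections import Counter
import math

# Hypothetical mini-ontology: learning outcomes mapped to keyword signatures.
# A real system would use a full postsecondary ontology and trained models;
# this sketch just ranks outcomes by cosine similarity of word counts.
ONTOLOGY = {
    "market research": "survey interview customer segment analyze data",
    "financial modeling": "forecast revenue cost spreadsheet budget model",
    "persuasive communication": "pitch present audience argue recommend",
}

def _vec(text):
    # Bag-of-words vector as a Counter (missing words count as 0).
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_outcomes(project_description, top_n=2):
    """Rank the most likely learning outcomes for a project description."""
    doc = _vec(project_description)
    scores = {name: _cosine(doc, _vec(sig)) for name, sig in ONTOLOGY.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

project = "Students will survey customers and analyze segment data for a nonprofit"
print(predict_outcomes(project))  # "market research" ranks first
```

The curricular layer (instructor notes, resources, assessments) would then be attached to whichever outcomes rank highest.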

Finally, as students move through a project, our software will collect step-by-step data on how students solve problems. In doing so, we will be simultaneously classifying the kinds of problems that exist in industry and profiling students’ strengths at solving various kinds of problems. That data will bring together learners and companies around the one question that companies care about: how well can you solve my problem? That’s something resumes, portfolios, LinkedIn, or even other predictive analytics talent tools will never be able to do.

Companies sponsor projects, which are tackled by student teams across multiple classes, schools, or even institutions in an innovation challenge fashion (similar to OpenIDEO). Companies may connect with and recruit teams based on the quality of their solutions. They also get to sample dozens if not hundreds of approaches to their problem, with detailed analytics into the kind of problem(s) the company has. They can use this information in both business strategy and talent recruitment. To the latter, Sidekick also profiles students according to which kinds of problems they solve best; companies can search these problem-solving profiles to find and recruit talent.


How do you assess the individuals within a team? Would you be able to identify individual strengths, as Moneyball GPA proposes?

We've integrated a few ways to assess and ensure learning and progress for individuals (although it's worth noting that the research here suggests this is not as large a concern as it intuitively seems). First, however, it's important to understand the underlying mechanisms.

  1. We have several incremental checkpoints built into projects where students complete authentic assessments in the form of actually-useful work products. These provide us with snapshots of the student team's activities and progress along the project, but they also slot in nicely for traditional evaluation from professors, TAs, and peers.
  2. Unlike most innovation competitions, where concept development resembles a traditional funnel, we expect sponsoring project partners to check in on a regular cadence. We obviously don't expect partners to weed through every single concept every single time, so we build in a peer evaluation first: teams pitch their hypotheses or solutions, and other teams can "Adopt" a pitch. This is similar to OpenIDEO's "Inspired by" or "Create Team" options, except an adopting team is effectively giving up its own idea for someone else's. They have to have some actual conviction in the adopted idea, so they apply a more rigorously critical eye, certainly more than one would before giving a Like or an Upvote, and even more than before crediting someone with contributions. The count of pitches adopted then becomes an important proxy for the quality of a student team's pitches.
  3. Project partners stay on the platform after project completion to report back on a) whether the final deliverable was implemented and then eventually b) whether it was successful (the Sidekick team uses reengagement campaigns when necessary). These milestones are then linked back to the project and the student teams that contributed to the final deliverable.

Now, to answer the direct question:

  1. Many professors and TAs are not new to group work, and even those who are remain trained educators and experts in their disciplines. Our work products give them the freedom to assess work against their personal or institutional standards. Also, because these projects are automatically managed and much of the learning is self-directed, educators are freed up to observe, intervene, and evaluate students individually.
  2. Sidekick gets more powerful with repeated use. Although educators are free to override Sidekick's recommendations, our team matching algorithm balances diversifying team capabilities (shown to improve learning for the lowest performers on the team), clustering students around what they need to learn, and assigning teammates in a "round-robin" fashion. This allows individuals to differentiate around adoption, implementation, and success counts (what's also exciting is the potential covariance between team members and the insights that will reveal!). It is a bit similar to the Sabermetrics approach proposed by Moneyball GPA.
  3. Finally, don't count out existing assessments like tests! While the future may find a better tool for assessment, our goal is to create something that can be used today while riding that wave into the future. Research shows that real-world project-based learning, when done well, results in better performance on high-stakes testing.
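The "diversify capabilities with round-robin assignment" balancing goal in point 2 can be sketched with a simple snake draft. Everything here (the student names, the single skill score, and the snake-draft heuristic itself) is an invented simplification of what a real matching algorithm would do:

```python
import itertools

def match_teams(students, n_teams):
    """students: list of (name, skill_score) pairs. Returns n_teams rosters
    whose skill levels are spread out via a snake draft (teams 1..n, then n..1),
    so every team gets a mix of stronger and weaker performers."""
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    teams = [[] for _ in range(n_teams)]
    # Snake-draft pick order: 0,1,2,2,1,0,0,1,2,...
    order = itertools.cycle(list(range(n_teams)) + list(range(n_teams - 1, -1, -1)))
    for name, _score in ranked:
        teams[next(order)].append(name)
    return teams

roster = [("Ana", 92), ("Ben", 85), ("Cal", 78), ("Dee", 70), ("Eli", 64), ("Fay", 55)]
print(match_teams(roster, 3))
# → [['Ana', 'Fay'], ['Ben', 'Eli'], ['Cal', 'Dee']]
```

Note how the resulting team skill totals (147, 149, 148) end up nearly equal, which is the point of the diversification goal.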

Would you be able to give me an example of a project?

An early project partner is an international nonprofit that runs a network of schools in emerging markets for children marginalized by disabilities. They've seen great success and are looking to expand their operations. The problem, however, is local public opinion. These disabilities are still taboo in some areas, and community resistance and restrictive local policies are mounting. Student teams were tasked with delivering an overview of the cultural and historical context the organization was operating in and a recommendation on the key question: what is their best next step? Should they look to raise money in the States (where they're headquartered) to combat the resistance, or should they invest their time in building bridges with local leaders? Naturally, they should do both, but where should their priorities lie?

Such a project obviously fits into several disciplines, and student teams in independent studies or cross-disciplinary courses could tackle the project naturally while Sidekick ensures they learn. Or, educators can "filter" on the desired topic, adding constraints on how students can approach the problem (this limits authenticity in our view, but it's an understandable decision that we leave up to the educator). Because we source teams across several classes or even institutions, we diversify the disciplines on the project to ensure the partner is still receiving quality options.

Who is your idea designed for and how does it reimagine higher education to support the needs of tomorrow?

Our paying users are Heads of HR and Talent Directors. The *most important* users are ethnic minority and first-generation undergraduates. Both college persistence and STEM career representation for these students are abysmal. Our solution helps higher education systems address both major issues because we a) provide real-world project-based learning proven to have outsized academic gains for these students, b) give students chances to develop skills and networks, and c) reduce hiring biases.

This idea emerged from:

  • A group brainstorm
  • A student brainstorm

What skills, input or guidance from the OpenIDEO community would be most helpful in building out or refining your idea?

We have a working proof of concept for the simplest version of our software, but as our algorithms get more sophisticated we could use additional data science expertise and suggestions. Additionally, a big risk for us is ensuring students and professors adopt the authentic assessments we provide, which is how we collect data. Strong product and UX skills as well as all additional insights into our users' behaviors, thought processes, and needs would be a big help.

What early, lightweight experiment might you try out in your own community to find out if the idea will meet your expectations?

We're in the process of implementing our first major pilots with educational institutions in March. In the meantime, we can use an MVP (sales collateral) to test "Mafia Offers" with potential sponsors (companies). This is as simple as starting with our immediate networks' employers.

Additionally, we need to prove that students will use our value-added authentic assessments throughout the project, which is critical to data collection. This we can test with concierge pilots.

Tell us about your work experience:

Chris leads product and technology and comes from startups and finance, most recently as an analytics product manager before pursuing his MBA. Jeremy heads up curriculum development and strategy and comes from education as a TFA alum who spent six years in a college prep classroom. Brian handles operations and business development and started his own education foundation in Hong Kong. Safia manages relationships with corporate partners and higher ed accounts and hails from tech and education.


Join the conversation:

Joel:

Hi Chris, perhaps look into Riipen, a Vancouver-based edtech venture. It might help further inspire your idea, as I see it being quite unique and innovative. Congrats.

