Table of contents
- Learn More
- Meet the Cast: Users
- The Solution
- Learn More
When co-founder and CEO Chris was 18, he made a promise to change the perception of what a student could do. Sidekick makes good on that promise by letting young adults work side-by-side with industry experts on real-world projects while achieving greater academic outcomes than they would through traditional instruction.
To avoid redundancy with the new questions, we've shortened this aspect of the online submission. If you're interested in learning more about the problem, solution, or business model, check out the new questions or review the attached Executive Summary!
Two decades of research show that authentic-audience, project-based learning may be the most comprehensively effective learning method, with improvements of 3% to 300% in academic and high-stakes testing outcomes. Studies have shown that project-based learning can also be an effective lever for closing the achievement gap across sex, race, and socioeconomic status. These benefits aren't a secret: 80% of professors report wanting to bring learning through relevant, real-world projects into their classrooms. Yet only 1% do.
This is because it takes 3 to 5 times the effort to first find a real-world partner with a suitable problem to solve, align the project to desired learning outcomes, and then adapt the supporting instruction and materials to the inevitable twists and turns that come with a real-world project.
Meet the Cast: Users
Early in the Refinement Phase, we recognized a critical assumption that had long gone unquestioned: student adoption and engagement rates. Student input is crucial to the long-term sustainability of Sidekick, because the inputs students provide are what feed our AI.
We conducted 15 student user interviews at multiple universities in New York and Illinois. From these exploratory interviews, we learned how little we knew about these users! But we also recognized a couple of promising themes:
- "Stay on top of my studies": far more top-of-mind than careers was simply doing well in class. This, paired with the confirmation that students struggle to work in groups, presents an opportunity to fill an academic need while serving our corporate recruiters' professional one: a companion app for students that proactively drops "breadcrumbs" on how to solve problems in exchange for activity data.
- "Very, very undecided": Even late in their undergraduate careers, students often did not know what they would do after school. They use internships to explore what they like, but they can only do so many, and each is a substantial commitment; meanwhile, most new hires know within the first week whether they will stay at a company. Students also feel that career advisement centers sometimes underserve them, leaving an opportunity to bring real-world skills and career exploration directly into the classroom.
We also reviewed dozens of prior interviews across all four key user types. The following personas were developed from these data.
(It should be noted that our business model ultimately hinges on payments from corporate recruiters, but given our recent focus on Noah, for this competition we wanted to think about how our product helps the students.)
How do you assess the individuals within a team? Would you be able to identify if the team is a Moneyball GPA?
We've integrated several ways to assess and ensure learning and progress for individuals (though it's worth noting that research suggests this is not as large a concern as it intuitively seems). First, however, it's important to know the underlying mechanisms:
- We have several incremental checkpoints built into projects where students complete authentic assessments in the form of actually-useful work products. These give us snapshots of the student team's activities and progress along the project, and they also slot in nicely for traditional evaluation by professors, TAs, and peers.
- Unlike most innovation competitions, where concept development resembles a traditional funnel, we expect sponsoring project partners to check in on a regular cadence. We obviously don't expect partners to weed through every single concept every single time, so we build in a peer evaluation first: teams pitch their hypotheses or solutions, and other teams can "Adopt" a pitch. This is similar to OpenIDEO's "Inspired by" or "Create Team" options, except that an adopting team is effectively giving up its own idea for someone else's. Because adopters must have real conviction in the idea, they apply a more rigorously critical eye, certainly more scrutiny than a Like or an Upvote demands, and even more than crediting someone with contributions. The count of pitches adopted then becomes an important proxy for the quality of a student team's pitches.
- Project partners stay on the platform after project completion to report back on a) whether the final deliverable was implemented and then eventually b) whether it was successful (the Sidekick team uses reengagement campaigns when necessary). These milestones are then linked back to the project and the student teams that contributed to the final deliverable.
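As a minimal sketch of the adoption signal described above (the event-log shape and names are our illustration, not the product's data model), the per-team tally can be computed like this:

```python
from collections import Counter

def adoption_counts(adoption_events):
    """Tally how many times each team's pitch was adopted by peers.

    adoption_events is a hypothetical log of (adopting_team, pitching_team)
    pairs. Because an adopting team gives up its own idea, each event is a
    costly signal of conviction, so the per-team count serves as a proxy
    for pitch quality.
    """
    return Counter(pitching for _adopting, pitching in adoption_events)
```

For example, a log in which two teams adopt `t1`'s pitch and one team adopts `t4`'s yields a count of 2 for `t1` and 1 for `t4`.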
Now, to answer the direct question:
- Many professors and TAs are no strangers to group work, and even those who are remain trained educators and experts in their disciplines. Our work products give them the freedom to assess work against their personal or institutional standards. And because these projects are automatically managed and much of the learning is self-directed, educators are freed up to observe, intervene, and evaluate students individually.
- Sidekick gets more powerful with repeated use. Although educators are free to override Sidekick's recommendations, our team-matching algorithm balances diversifying team capabilities (shown to improve learning for the lowest performers on a team), clustering students around what they need to learn, and assigning teammates in a round-robin fashion. Over repeated projects, individuals start to differentiate on adoption, implementation, and success counts (what's also exciting is the potential covariance between team members and the insights that will reveal!). This is somewhat similar to the sabermetrics approach proposed by Moneyball GPA.
- Finally, don't count out existing assessments like tests! While the future may find a better tool for assessment, our goal is to create something that can be used today while riding that wave into the future. Research shows that real-world project-based learning, when done well, results in better performance on high-stakes testing.
Would you be able to give me an example of a project?
An early project partner is an international nonprofit that runs a network of schools in emerging markets for children marginalized by disabilities. They've seen great success and are looking to expand their operations. The problem, however, is local public opinion: these disabilities are still taboo in some areas, and community resistance and restrictive local policies are mounting. Student teams were tasked with delivering an overview of the cultural and historical context the organization operates in and a recommendation on the key question: what is their best next step? Should they raise money in the States (where they're headquartered) to combat the resistance, or should they invest more time in building bridges with local leaders? Naturally they should do both, but where should their priorities be?
Such a project obviously fits into several disciplines, and student teams in independent studies or cross-disciplinary courses could tackle it naturally while Sidekick ensures they learn. Alternatively, educators can "filter" on a desired topic, adding constraints on how students may approach the problem (this limits authenticity in our view, but it's an understandable decision that we leave to the educator). Because we source teams across several classes or even institutions, we diversify the disciplines on the project to ensure the partner still receives quality options.