Yes! The Tools Competition is eager to hear from participants from across the globe. Participants must be able to accept funds from US-based entities.
Individuals and entities residing in Cuba, Iran, North Korea, Russia, Sudan, and Syria are not eligible to participate in the competition.
Yes! We are eager to hear from and support individuals who are new to the field. If you are new, we encourage you to compete at the Catalyst award level, where your proposal will be most competitive. Please see more information below on award levels and take the eligibility quiz for more guidance.
The Tools Competition seeks to spur new tools and technology. This means that something about the proposal needs to be fresh, innovative, or original. This does not mean you have to create a new tool or new platform.
Proposals seeking a Growth Phase or Transform Phase award must build on an existing platform; platforms may be at varying levels of development and scale. This might mean an API that will improve the platform or a new tool to improve effectiveness. Or it could mean adding infrastructure that allows external researchers to access your data.
See more about award levels and eligibility requirements here.
The competition has four ‘tracks’ or priority areas that reflect the pressing needs and opportunities in education. Competitors will be required to select one of the tracks in which their submission will be primarily evaluated. Competitors can also select a secondary track.
The competition tracks include:
Each track has somewhat different requirements and eligibility criteria and certain tracks may be more or less competitive than others depending on final funding allocation and the number of competitors in each track. Tracks may also have different total prize purses, depending on sponsor priorities.
See more on each track here.
The Phase I submission form will ask you to select a primary track and a secondary track. Your primary track should be the track that best matches your proposal.
If competition organizers invite you to Phase II, they will carefully review the proposal to confirm your track or recommend a new one.
Consider the following recommendations:
Growth or Transform Phase competitors in the Learning Science Research track are eligible to receive a supplemental award of $100,000 for district partnerships, if the district has at least 10,000 students of which the majority come from historically marginalized populations. See more below.
Complete the eligibility quiz to determine which award level best fits your proposal.
Growth or Transform Phase competitors in the Learning Science Research track are eligible to receive a supplemental award of $100,000 for partnerships with a district or consortium of districts with at least 10,000 students of which the majority come from historically marginalized populations. The partnership must include:
If you are entering the competition as a district or consortium of districts, you are also eligible to compete, as long as you are partnered with a researcher.
Refer to the Official Rules for full eligibility requirements.
Proposals will be evaluated on whether they are clear, concise, actionable, and attainable, with budgets that are realistic and aligned with what is being proposed. Judges will evaluate how you will maximize your impact.
Indirect costs must not exceed 10 percent of the total budget. Other than that, there are no specific requirements on what costs are allowed or not allowed (within reason, of course).
For this track, we are looking for tools that both capture traditionally unmeasured elements of learning and development and improve the quality of assessments to better meet the needs of educators, students and families while reducing the time to develop, administer, or interpret them. All forms of assessment – diagnostic, formative, summative, direct-to-family – are eligible.
This year’s competition is especially focused on new ideas that address one or more of the following areas:
For examples of other promising innovations in assessment, review last year’s Assessment Track winners.
For this track, we are looking for tools that cultivate or support prospective, developing, and established teachers to improve their practice and maximize learning for all. Tools that support teacher retention, satisfaction, and effectiveness across schools are encouraged.
Just as technology has the potential to personalize and improve learning for Pre-K to secondary students, the same is true for adults. Schools of education, school districts, and other teacher development entities can leverage tools to prepare educators for the classroom as well as offer data and feedback to inform educators’ instructional decisions or improve practice.
As an example, consider Teaching Lab Plus, a 2020 Tools Competition winner, which will collect effectiveness data on professional learning programs in order to improve current programs. Or, consider a simulation that allows teacher candidates to practice how they would respond to difficult moments in a classroom and receive real-time feedback.
For this track, we are looking for tools that accelerate the learning science research process in order to improve learning interventions. Tools may facilitate A/B testing and randomized controlled trials, improve research design, promote replication, or release knowledge and data for external research. This year, there is a competitive priority for proposals that directly address or could be applied to math instruction.
Please review last year’s winners for examples of competitive proposals in the Learning Science Research track.
The competition is eager to promote tools that are developed in consultation with practitioners. As a result, this year, Growth or Transform Phase competitors in the Learning Science Research track are eligible to receive a supplemental award of $100,000 if they partner with a district or consortium of districts with at least 10,000 students of which the majority come from historically marginalized populations. The district partners would co-design research questions, implement the tool with at least 3,000 students, and incorporate the research findings into district instruction or policy.
For this track, we’re looking for tools that accelerate outcomes in literacy and math and increase relevance of instruction to prepare students for college and careers. Tools should have an equity focus, addressing the declines in academic progress across different races, ethnicities, socioeconomic groups, geographies and disability statuses. The competition also aims to support making knowledge and skills more relevant.
Please review last year’s winners for examples of competitive proposals in the K-12 Accelerated Learning track.
The Tools Competition has a phased selection process in order to give competitors time and feedback to strengthen their tool and build a team. Proposals will be reviewed at each phase and selected submissions will be invited to submit to the next round.
For more information refer to our How to Compete page.
Proposals will be evaluated against others within the same track. Proposals at higher award levels will be subject to greater scrutiny. At each stage of the competition, reviewers will evaluate proposals based on eligibility requirements for the award level as well as:
For more information on eligibility criteria, refer to the Official Rules.
Interested competitors are also welcome to reach out to ToolsCompetition@the-learning-agency.com with questions or feedback.
Additional avenues for support, including 1:1 feedback calls and office hours, will be emailed out to our email list, so please make sure to sign up for updates here.
We also recommend joining the Learning Engineering Google Group. Opportunities for partnership and additional support are also frequently posted there.
If you need help identifying a researcher, please reach out to Toolscompetition@the-learning-agency.com. We have a large and growing network of researchers who can assist platforms with:
We can facilitate connections to researchers through individual requests or broader networking listservs and events.
Competitors seeking a Growth Phase or Transform Phase award must have a commitment from one or more external researchers interested in using data from their platform by the time they submit their detailed proposal for Phase 2, which is due February 19th, 2023.
This does not need to be a formal agreement, and the researcher does not need to have already secured funding. Instead, we want to see that you have started forming partnerships with external researchers to share your data and consider how that will require you to adapt your tool.
Most importantly, the tool must be designed so that multiple researchers can access data from the platform over time. Given this, we assume that if the researcher you are working with falls through for any reason, you will be able to establish another partnership quickly.
Competition organizers will evaluate proposals based on their commitment and plans to integrate perspectives and feedback from the stakeholders who stand to benefit most: students (or learners), families, and educators.
Digital learning technologies touch many different stakeholders. Developers create new technologies or platforms. Buyers (often procurement officers in public entities that oversee education systems or schools) evaluate the merit of different platforms and select which ones should be made available to various schools. Researchers leverage data from digital learning platforms to better understand what is effective for learners.
While all of these stakeholders are critical in the edtech landscape, the stakeholders benefiting from or using the technologies – students, families or educators – have the most at stake in the design, development and implementation of new tools and interventions. But they are often the most overlooked.
Students’ input is critical across all interventions, as they are the ultimate beneficiaries, even of tools directed towards families and educators. Families and educators are key, as they provide valuable insights into students’ learning process, and in many cases are needed to implement the tools with learners. Given this, it is especially important to seek out these stakeholders’ perspectives and input at every stage of development of new innovations.
The Tools Competition encourages competitors to solicit and incorporate input from students, families and educators throughout the design, development and implementation of their new tool.
Not necessarily. Many tools will be specifically designed to support only one of those groups. That said, the experiences and needs of students, families, and educators are closely related. As a result, it is helpful to engage and receive feedback from all three groups when designing, developing, and implementing a new tool or intervention.
For example, 2021 winner M-Powering Teachers, which uses natural language processing to analyze how math teachers instruct and interact with students, is designed to provide educators with actionable feedback in order to improve their practice. Developers must pay particular attention to input from educators in order to tailor the content and representation of feedback so it is welcome and actionable; however, in order to provide meaningful feedback to educators, the tool should also incorporate research and student perspective on effective student-centered learning. This can empower teachers with the capacity to better differentiate for the unique needs of learners and encourage autonomy in how students direct their own learning paths.
2020 winner Springboard Collaborative designed a direct-to-family tool that allows caregivers to assess their child’s reading level. For this tool, caregivers are the direct user and their experience with the tool is most likely to drive overall impact. Yet, in order to maximize the value of this assessment, the tool should closely mirror the types of assessments educators will administer in school and the language educators will use to discuss student performance with families.
The competition will assess the extent to which the tool addresses a clear need demonstrated by the stakeholders that will directly use the tool. It will also evaluate the likelihood that learners, families or educators will use data and insights generated by the tool to improve outcomes.
In other words, proposals should address the following questions:
Regardless of your team’s size, everyone can begin to engage learners, families, and educators in their design. Start small, even if you are struggling to find stakeholders to engage or are unsure how best to tailor questions to inform your strategy. Consider:
You may encounter certain challenges as you aim to incorporate demand from learners, families, and educators. For instance, it can be hard to access diverse groups of students, educators, and families to ensure a representative sample of feedback. Also, for tools used primarily by young students, it can be difficult to design questions that elicit meaningful feedback, or even to obtain permission to survey them.
We are here to support your team in thinking through your approach to incorporating the perspective of those ultimately benefiting from your tool. Reach out to ToolsCompetition@the-learning-agency.com for support.
Winners will receive their award by check or bank transfer in two installments.
Winners will receive the first installment soon after winning. Winners will receive the second installment of the award after Product Review Day if they are making sufficient progress on the plan they outlined in their Phase 2 proposal.
Winners will present during a virtual Product Review Day to their peers and others in the field to get feedback and perspective on their progress.
Approximately one year after winners are notified, winners will convene again to present their progress in a Demo Day.
Yes! We strive to support all competitors, not just winners. At each phase, the organizers will compile lists of opportunities for additional funding, support, and partnership.
We also encourage your team, if not selected, to stay in touch with the organizers through ToolsCompetition@the-learning-agency.com and the Learning Engineering Google Group.
Competition organizers are eager to support winners and learn from their work to inform future resources for competitors and winners. To do so, all winners will participate in an impact study during which research advisors will work with you to incorporate new measures into your internal evaluation process. In addition, all winners will complete two surveys each year for three to five years after winning.
The learning engineering approach is critical because the current process to test and establish the efficacy of new ideas is too long and too expensive. Learning science research remains slow, small-scale, and data-poor compared to other fields. The result is that teachers and administrators often lack both proven tools and the research they need to make informed pedagogical decisions. Learning engineering aims to solve this problem using the tools of computer science.
For individual platforms, the learning engineering approach is important because it allows for platforms to engage in rapid experimentation and continuous improvement. In other words, learning engineering allows for platforms to quickly understand if an approach works and for whom and at what time. This is central to scaling an effective product and generating high quality data.
Far too often, education research proves to be a frustrating process. Experiments often take years. Costs are high, sometimes many millions of dollars per study. Quality is also inconsistent, and many studies have small ‘n’ sizes and lack rigorous control. Similarly, the field lacks high-quality datasets that can spark better research and richer understanding of student learning.
Part of the issue is that learning is a complicated domain that takes place in highly varied contexts. Another issue is that the subjects of the studies are typically young people and so there are heightened concerns around privacy.
But the consequences of weak research processes are clear, and in education, experts often don't know much about what works, why it works, for whom it works, and in what contexts.
Take the example of interleaved practice, or mixing up problem sets while learning. Research into middle school math has established that students learn better when their practice is interleaved, meaning students practice a mix of new concepts and concepts from earlier lessons. But it’s an open research question how far this principle extends. Does interleaved practice work equally well for reading comprehension or social studies? Does it work for younger math students too? Does the type of student (high-achieving versus behind) matter?
This lack of knowledge has important consequences, and far too much money, time, and energy is wasted on unproven educational theories and strategies.
Learning engineering, at its core, is really about three processes:
Some but not all platforms will partner with researchers to better learn what’s working best for students. These findings can then be shared with the community at large to help improve learner outcomes everywhere.
Consider these questions:
Does your platform allow external researchers to run science of learning studies within your platform?
If the answer is yes, then your platform is instrumented and you should address how this instrumentation will scale and grow with the support of the Tools Competition.
Does your platform allow external researchers to mine data within your platform to better understand the science of learning?
If the answer is yes, then your platform is instrumented and you should address how this instrumentation will scale and grow with the support of the Tools Competition.
If the answer to either of the above questions is “no,” then we highly recommend that you partner with a researcher to help you think through how to begin to instrument your platform as part of the Tools Competition.
See more below for how to instrument your platform.
Instrumentation is building out a digital learning platform so that many external researchers can engage in research. To be more exact, the platform offers its data as an “instrument” for research. In this sense, instrumentation is central to learning engineering; it is the process by which a platform turns its data into a research tool.
One primary way to instrument is to build a way for external researchers to run A/B experiments. Several platforms have created systems that let external researchers run their trials directly on the platform; in other words, they have “opened up” their platforms to external researchers. These systems facilitate large-scale A/B trials and offer open-source trial tools, as well as tools that teachers themselves can use to conduct their own experiments.
When it comes to building A/B instrumentation within a platform, the process usually begins with identifying key data flows and ways in which there could be splits within the system. Platforms will also have to address issues of consent, privacy, and sample size. For instance, the average classroom does not provide a large enough sample size, and so platforms will need to think about ways to coordinate across classrooms. A number of platforms have also found success building “templates” to make it easier for researchers to run studies at scale.
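The randomization step described above can be sketched in code. The approach below is an illustrative sketch, not any specific platform's implementation: hashing a student ID together with an experiment ID gives a deterministic, student-level split that stays stable across sessions and roughly balanced across many classrooms without central coordination.

```python
import hashlib

def assign_condition(student_id: str, experiment_id: str,
                     conditions: tuple = ("control", "treatment")) -> str:
    """Deterministically assign a student to an experimental condition.

    Hashing (experiment_id, student_id) means the same student always
    sees the same condition, and the split stays roughly balanced even
    when enrollments span many classrooms.
    """
    digest = hashlib.sha256(f"{experiment_id}:{student_id}".encode()).hexdigest()
    return conditions[int(digest, 16) % len(conditions)]
```

Because assignment is a pure function of the IDs, small classroom samples can later be pooled into one large experiment, which is one way around the sample-size problem noted above. Consent and privacy handling still need to happen before any data is logged.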
One example of this approach is the ETRIALS testbed created by the ASSISTments team. As co-founder Neil Heffernan has argued, ETRIALS “allows researchers to examine basic learning interventions by embedding RCTs within students’ classwork and homework assignments. The shared infrastructure combines student-level randomization of content with detailed log files of student- and class-level features to help researchers estimate treatment effects and understand the contexts within which interventions work.”
To date, the ETRIALS tool has been used by almost two dozen researchers to conduct more than 100 studies, and these studies have yielded useful insights into student learning. For example, Neil Heffernan has shown that crowdsourcing “hints” from teachers has a statistically significant positive effect on student outcomes. The platform is currently expanding to increase the number of researchers by a factor of ten over the next three years.
Carnegie Learning created the powerful Upgrade tool to help ed tech platforms conduct A/B tests. This project is designed to be a “fully open source platform and aims to provide a common resource for learning scientists and educational software companies.” Using Carnegie Learning’s Upgrade, the Playpower Labs team found that adding “gamification” actually reduces learner engagement by 15 percent.
A secondary way that learning platforms can contribute to the field of learning engineering is by producing large shareable datasets. Sharing large datasets that have been anonymized (removed of all personally identifiable markers, to protect student privacy) is a big catalyst for progress in the field as a whole.
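As a sketch of what removing personally identifiable markers can look like in practice, one common pattern is to drop direct PII fields and replace the student ID with a salted hash. The field names below are hypothetical, and real anonymization also requires review for indirect identifiers (e.g., rare combinations of demographics):

```python
import hashlib

# Hypothetical direct-identifier columns to strip before release.
PII_FIELDS = {"name", "email", "date_of_birth", "address"}

def anonymize_record(record: dict, salt: str) -> dict:
    """Drop direct identifiers and pseudonymize the student ID.

    A salted hash preserves linkage across a student's records within
    the released dataset without exposing the original ID.
    """
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    raw_id = str(record["student_id"])
    clean["student_id"] = hashlib.sha256((salt + raw_id).encode()).hexdigest()[:16]
    return clean
```

Keeping the salt secret is what prevents anyone from re-deriving the pseudonym from a known student ID.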
In the field of machine learning for image recognition, there is a ubiquitously used open-source dataset of more than a million labeled images called “ImageNet”. The creation and open-source offering of this dataset has allowed researchers to build better and better machine learning image recognition algorithms, catapulting the field of image recognition to a new, higher standard. We need similar datasets in the field of education.
An example of this approach is the development of a dataset aimed at improving assisted feedback on writing. Called the “Feedback Prize,” this effort will build on the Automated Student Assessment Prize (ASAP) that occurred in 2012 and support educators in their efforts to give feedback to students on their writing.
To date, the project has developed a dataset of nearly 400,000 essays from more than a half-dozen different platforms. The data are currently being annotated for discourse features (e.g., evidence, claims) and will be released as part of a data science competition. More on the project here.
Another example of an organization that has created a shared dataset is CommonLit, which uses algorithms to determine the readability of texts. CommonLit has shared its corpus of 3,000 level-assessed reading passages for grades 6-12. This will allow researchers to create open-source readability formulas and applications.
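To give a sense of what an open-source readability formula looks like, here is a sketch of the classic Flesch-Kincaid grade-level formula. This is not CommonLit's method, which is more sophisticated, and the syllable counter below is a crude vowel-group heuristic:

```python
import re

def estimate_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(estimate_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

A shared corpus of level-assessed passages lets researchers fit and validate formulas like this one against human judgments rather than guessing at the coefficients.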
For the Learning Engineering Tools Competition 2022, a dataset alone would not make a highly competitive proposal. Teams with a compelling dataset are encouraged to partner with a researcher or developer that will design a tool or an algorithm based on the data.
Check out this video series for a more detailed introduction to learning engineering. Or get an in-depth look at how one platform, ASSISTments, has instrumented for research, in this Ask Me Anything event with Neil Heffernan.
You can also join the Learning Engineering Google Group for news, upcoming events, and funding opportunities.
For further reading to learn more about learning engineering, see these articles: