Frequently Asked Questions

Eligibility

Please refer to the Official Rules. All participants must agree to these rules to compete.

Yes! The Tools Competition is eager to hear from participants from across the globe. Participants must be able to accept funds from US-based entities.

Individuals and entities residing in Cuba, Iran, North Korea, Russia, Sudan, and Syria are not eligible to participate in the competition.

Yes, proposals must be in English.

Yes! We are eager to hear from and support individuals who are new to the field. We encourage you to compete at the Catalyst award level, where you are likely to be most competitive. Please see more information below on award levels, and take the eligibility quiz for more guidance.

Yes! Anyone 18 years or older is eligible, and we are eager to hear from people at all stages of the development process.

Yes! We encourage you to submit a proposal and make a note of your conflict.

Developing successful proposals

Submissions for the 2022 Tools Competition are open through November 20, 2022. You can read more about our submission process and how to compete here.

The Tools Competition seeks to spur new tools and technology. This means that something about the proposal needs to be fresh, innovative, or original. This does not mean you have to create a new tool or new platform.

Proposals seeking a Growth Phase or Transform Phase award must build on an existing platform, which may be at varying levels of development and scale. The proposed work might be an API that will improve the platform, a new tool to improve effectiveness, or new infrastructure that allows external researchers to access your data.

See more about award levels and eligibility requirements here.

The competition is open to solutions for Pre-K to secondary learners.

The competition has four ‘tracks’ or priority areas that reflect the pressing needs and opportunities in education. Competitors will be required to select one of the tracks in which their submission will be primarily evaluated. Competitors can also select a secondary track.

The competition tracks include:

  • Transforming assessments to collect new measures, drive quality and reduce cost.
  • Strengthening teacher development and support.
  • Facilitating faster, better, and cheaper learning science research.
  • Accelerating learning for all.

Each track has somewhat different requirements and eligibility criteria, and certain tracks may be more or less competitive than others depending on the final funding allocation and the number of competitors in each track. Tracks may also have different total prize purses, depending on sponsor priorities.

See more on each track here.

The Phase I submission form will ask you to select a primary track and a secondary track. Your primary track should be the one that best matches your proposal.

If competition organizers invite you to Phase II, they will carefully review the proposal to confirm your track or recommend a new one.

Consider the following recommendations:

  • Focus on the two competitive priorities: (1) math-focused proposals across the Accelerating Learning for All, Transforming Assessments, and Learning Science Research tracks; and (2) proposals in the Assessment track that include non-academic measures.
  • Prioritize your tool’s alignment to learning engineering principles. See more on learning engineering below.
  • Incorporate the need and demand of learners, families, and educators into the design and development of the tool. See more on this below.
  • For Growth or Transform Phase competitors in the Learning Science Research track, there is a supplemental award of $100,000 available for proposals that include a district partnership. See more below.

Award Levels & Budget

The competition offers three award levels:
  • Catalyst ($50,000): aimed at new competitors, including students, teachers, civic technologists, or those who need that initial spark of support to get started.
  • Growth ($100,000): for teams that have some users and a minimum viable product upon which their new idea will build.
  • Transform ($250,000): for teams with an established platform with more than 10,000 users upon which the new idea will build.

Growth or Transform Phase competitors in the Learning Science Research track are eligible to receive a supplemental award of $100,000 for district partnerships, if the district has at least 10,000 students, the majority of whom come from historically marginalized populations. See more below.

Complete the eligibility quiz to determine which award level best fits your proposal.

No, you are not required to compete in the Transform award level. You are welcome to compete for a lower award level if you believe that the idea is in an earlier stage of development.

Growth or Transform Phase competitors in the Learning Science Research track are eligible to receive a supplemental award of $100,000 for partnerships with a district or consortium of districts with at least 10,000 students, the majority of whom come from historically marginalized populations. The partnership must include:

  • co-design of research questions
  • data collection from at least 3,000 students by Year 2
  • a strategy to incorporate the findings from the research into district instruction or program

If you are entering the competition as a district or consortium of districts, you are also eligible to compete, as long as you are partnered with a researcher.

Refer to the Official Rules for full eligibility requirements.

Proposals will be evaluated based on whether they are clear, concise, actionable, and attainable, with budgets that are realistic and aligned with what’s being proposed. Judges will evaluate how you will maximize your impact.

Indirect costs must not exceed 10 percent of the total budget. Other than that, there are no specific requirements on what costs are allowed or not allowed (within reason, of course).

There is no definitive time period for the award. It is recommended that awarded proposals demonstrate significant progress by Product Review Day in Fall 2023 to receive the second installment of funds. This progress will be measured against the timeline for execution outlined in the proposal.

Competition Tracks

For this track, we are looking for tools that both capture traditionally unmeasured elements of learning and development and improve the quality of assessments to better meet the needs of educators, students and families while reducing the time to develop, administer, or interpret them. All forms of assessment – diagnostic, formative, summative, direct-to-family – are eligible.

This year’s competition is especially focused on new ideas that address one or more of the following areas:

  • Non-academic measures. Tools that evaluate non-academic measures, including social emotional learning – relationships with adults and peers, emotional functioning, sense of identity, etc. – or approaches to learning – emotional and behavioral self-regulation, initiative, creativity, grit, etc. Many of these measures are “unconstrained” or developed gradually and without a ‘ceiling.’ This will influence the way the tool evaluates and helps users interpret progress. An example might be a tool that detects emotions through facial recognition. A subset of the overall award money for the assessment track will be reserved for proposals that identify non-academic measures for pre-K or pre-literate children, specifically.
  • Math performance. Tools that capture performance related to math across all grade levels, from number sense to advanced arithmetic expressions to data science. As an example, consider 2021 winner M-Powering Teachers, which uses NLP to evaluate student mathematical reasoning.
  • Stealth assessments. Many academic and non-academic measures can be effectively evaluated through stealth assessments. Many of the ideas listed above for non-academic and math performance tools would qualify as stealth assessments. Other examples include a tool that evaluates motivation and growth mindset by monitoring response time and error rate on digital learning platforms, or the 2021 winner from the University of Wisconsin-Madison, which is creating a suite of games to measure student progress across various academic domains.

For examples of other promising innovations in assessment, review last year’s Assessment Track winners.

For this track, we are looking for tools that cultivate or support prospective, developing, and established teachers to improve their practice and maximize learning for all. Tools that support teacher retention, satisfaction, and effectiveness across schools are encouraged.

Just as technology has the potential to personalize and improve learning for Pre-K to secondary students, the same is true for adults. Schools of education, school districts, and other teacher development entities can leverage tools to prepare educators for the classroom as well as offer data and feedback to inform educators’ instructional decisions or improve practice.

As an example, consider Teaching Lab Plus, a 2020 Tools Competition winner, which will collect effectiveness data on professional learning programs in order to improve current programs. Or consider a simulation that allows teacher candidates to practice how they would respond to difficult moments in a classroom and receive real-time feedback.

For this track, we are looking for tools that accelerate the learning science research process in order to improve learning interventions. Tools may facilitate A/B testing and randomized controlled trials, improve research design, promote replication, or release knowledge and data for external research. This year, there is a competitive priority for proposals that directly address or could be applied to math instruction.

Please review last year’s winners for examples of competitive proposals in the Learning Science Research track.

The competition is eager to promote tools that are developed in consultation with practitioners. As a result, this year, Growth or Transform Phase competitors in the Learning Science Research track are eligible to receive a supplemental award of $100,000 if they partner with a district or consortium of districts with at least 10,000 students of which the majority come from historically marginalized populations. The district partners would co-design research questions, implement the tool with at least 3,000 students, and incorporate the research findings into district instruction or policy.

For this track, we’re looking for tools that accelerate outcomes in literacy and math and increase the relevance of instruction to prepare students for college and careers. Tools should have an equity focus, addressing the declines in academic progress across different races, ethnicities, socioeconomic groups, geographies, and disability statuses. The competition also aims to support making knowledge and skills more relevant.

Please review last year’s winners for examples of competitive proposals in the K-12 Accelerated Learning track.

The Tools Competition has a phased selection process in order to give competitors time and feedback to strengthen their tool and build a team. Proposals will be reviewed at each phase and selected submissions will be invited to submit to the next round.

For more information refer to our How to Compete page.

Proposals will be evaluated against others within the same track. Proposals at higher award levels will be subject to greater scrutiny. At each stage of the competition, reviewers will evaluate proposals based on eligibility requirements for the award level as well as:

  • Potential impact and likelihood to improve learning
  • Attention to equity to support learning of historically marginalized populations
  • Demand from learners, educators, and families
  • Ability to support rapid experimentation and continuous improvement
  • Ability to scale to additional users and/or domains
  • Team passion and readiness to execute

For more information on eligibility criteria, refer to the Official Rules.

Yes! Before the November 20th deadline, the competition organizers will host two informational webinars. Webinars are scheduled for October 11 & October 20.

Interested competitors are also welcome to reach out to ToolsCompetition@the-learning-agency.com with questions or feedback.

Additional avenues for support, including 1:1 feedback calls and office hours, will be shared with our email list, so please make sure to sign up for updates here.

We also recommend joining the Learning Engineering Google Group. Opportunities for partnership and additional support are also frequently posted there.

Research Partnerships

Growth or Transform competitors in the Accelerated Learning, Assessment, or Strengthening Teacher Development tracks are required to either (1) identify an external researcher who has agreed to partner on the project, or (2) provide evidence from multiple external researchers that the tool could enable research.

External researchers must be external to the immediate organization that is receiving the funds, but they may work for the same institution in another department.

If you need help identifying a researcher, please reach out to ToolsCompetition@the-learning-agency.com. We have a large and growing network of researchers who can assist platforms with:

  1. How best to instrument a platform in ways that would serve the field,
  2. Determining what data a platform is able to collect and how best to collect it,
  3. Using the data and related research to answer questions of interest.

We can facilitate connections to researchers through individual requests or broader networking listservs and events.

You can include costs for external researchers, but ideally, your tool allows multiple researchers to leverage the data. Given that, your budget should cover establishing the infrastructure to allow external researchers to access your data. We anticipate interested researchers will be able to fundraise to conduct research using your data.

Competitors seeking a Growth Phase or Transform Phase Award must have commitment from one or more external researchers that they are interested in using the data from their platform by the time they submit their detailed proposal for Phase 2, which is due February 19th, 2023.

This does not need to be a formal agreement, and the researcher does not need to have already secured funding. Instead, we want to see that you have started forming partnerships with external researchers to share your data and consider how that will require you to adapt your tool.

Most importantly, the tool must be designed so that multiple researchers can access data from the platform over time. Given this, we assume that if the researcher you are working with falls through for any reason, you will be able to establish another partnership quickly.

Incorporating demand from learners, families, and educators

Competition organizers will evaluate proposals based on their commitment and plans to integrate perspectives and feedback from the stakeholders who have the most to benefit: students (or learners), families, and educators.

Digital learning technologies touch many different stakeholders. Developers create new technologies or platforms. Buyers (many times procurement officers in public entities that oversee education systems or schools) evaluate the merit of different platforms and select which ones should be made available to various schools. Researchers leverage data from digital learning platforms to better understand what is effective for learners.

While all of these stakeholders are critical in the edtech landscape, the stakeholders benefiting from or using the technologies – students, families or educators – have the most at stake in the design, development and implementation of new tools and interventions. But they are often the most overlooked.

Students’ input is critical across all interventions, as they are the ultimate beneficiaries, even of tools directed towards families and educators. Families and educators are key, as they provide valuable insights into students’ learning process, and in many cases are needed to implement the tools with learners. Given this, it is especially important to seek out these stakeholders’ perspectives and input at every stage of developing new innovations.

The Tools Competition encourages competitors to solicit and incorporate input from students, families and educators throughout the design, development and implementation of their new tool.

Explicitly, no. Many tools will be specifically designed to support only one of those groups. That said, the experiences and needs of students, families, and educators are closely related. As a result, it is helpful to engage and receive feedback from all three groups when designing, developing and implementing a new tool or intervention.

For example, 2021 winner M-Powering Teachers, which uses natural language processing to analyze how math teachers instruct and interact with students, is designed to provide educators with actionable feedback in order to improve their practice. Developers must pay particular attention to input from educators in order to tailor the content and presentation of feedback so it is welcome and actionable; however, in order to provide meaningful feedback to educators, the tool should also incorporate research and student perspectives on effective student-centered learning. This can empower teachers with the capacity to better differentiate for the unique needs of learners and encourage autonomy in how students direct their own learning paths.

2020 winner Springboard Collaborative designed a direct-to-family tool that allows caregivers to assess their child’s reading level. For this tool, caregivers are the direct user and their experience with the tool is most likely to drive overall impact. Yet, in order to maximize the value of this assessment, the tool should closely mirror the types of assessments educators will administer in school and the language educators will use to discuss student performance with families.

The competition will assess the extent to which the tool addresses a clear need demonstrated by the stakeholders that will directly use the tool. It will also evaluate the likelihood that learners, families or educators will use data and insights generated by the tool to improve outcomes.

In other words, proposals should address the following questions:

  • Is the problem this solution seeks to address of critical importance to students, families, or educators? Why? How do you know?
  • How have you or how do you plan to solicit feedback from students, families and educators in the design and implementation of your tool?
  • During the design, development, and implementation process, how will you leverage the input from those meant to benefit from or use the tool? What decision points will that feedback inform?

Regardless of your team’s size, everyone can begin to engage learners, families, and educators in the design of their tool. Start small, even if you are struggling to find stakeholders to engage or are unsure how best to tailor questions to inform your strategy. Consider:

  • Leveraging insights from publicly available studies of students, teachers, or families
  • Talking to those in your network to uncover insights and make connections to users you can engage for feedback
  • Organizing focus groups with small groups of users to understand demand and usability during your development and implementation process

You may encounter certain challenges as you aim to incorporate demand from learners, families, and educators. For instance, it can be hard to access diverse groups of students, educators, and families to ensure a representative sample of feedback. Also, for tools used especially by young students, it can be difficult to design questions that elicit meaningful feedback, or even to get permission to engage them.

We are here to support your team in thinking through your approach to incorporating the perspective of those ultimately benefiting from your tool. Reach out to ToolsCompetition@the-learning-agency.com for support.

What happens after the competition?

Winners will receive their award by check or bank transfer in two installments.

Winners will receive the first installment soon after winning. Winners will receive the second installment of the award after Product Review Day if they are making sufficient progress on the plan they outlined in their Phase 2 proposal.

Winners will present during a virtual Product Review Day to their peers and others in the field to get feedback and perspective on their progress.

Approximately one year after winners are notified, winners will convene again to present their progress in a Demo Day.

Yes! We strive to support all competitors, not just winners. At each phase, the organizers will compile lists of opportunities for additional funding, support, and partnership.

We also encourage your team, if not selected, to stay in touch with the organizers through ToolsCompetition@the-learning-agency.com and the Learning Engineering Google Group.

Competition organizers are eager to support winners and learn from their work to inform future resources for competitors and winners. To do so, all winners will participate in an impact study during which research advisors will work with them to incorporate new measures into their internal evaluation processes. In addition, winners will complete two surveys each year for 3-5 years after winning.

Learning Engineering

In the Tools Competition, learning engineering is defined as the use of computer science to pursue rapid experimentation and continuous improvement with the goal of improving student outcomes.

The learning engineering approach is critical because the current process to test and establish the efficacy of new ideas is too long and too expensive. Learning science research remains slow, small-scale, and data-poor, compared to other fields. The result is that teachers and administrators often have neither proven tools nor the research at hand they need to make informed pedagogical decisions. Learning engineering aims to solve this problem using the tools of computer science.

For individual platforms, the learning engineering approach is important because it allows them to engage in rapid experimentation and continuous improvement. In other words, learning engineering allows platforms to quickly understand whether an approach works, for whom, and when. This is central to scaling an effective product and generating high-quality data.

Far too often, education research proves to be a frustrating process. Experiments often take years. Costs are high, sometimes many millions of dollars per study. Quality is also inconsistent; many studies have small sample sizes and lack rigorous controls. Similarly, the field lacks high-quality datasets that can spark better research and a richer understanding of student learning.

Part of the issue is that learning is a complicated domain that takes place in highly varied contexts. Another issue is that the subjects of the studies are typically young people and so there are heightened concerns around privacy.

But the consequences of weak research processes are clear, and in education, experts often don't know much about what works, why it works, for whom it works, and in what contexts.

Take the example of interleaved practice, or mixing up problem sets while learning. Research into middle school math has established that students learn better when their practice is interleaved, meaning students practice a mix of new concepts and concepts from earlier lessons. But it’s an open research question how far this principle extends. Does interleaved practice work equally well for reading comprehension or social studies? Does it work for younger math students too? Does the type of student (high-achieving versus behind) matter?

This lack of knowledge has important consequences, and far too much money, time, and energy is wasted on unproven educational theories and strategies.

Learning engineering, at its core, is really about three processes:

  1. systematically collecting data as users interact with a platform, tool, or procedure while protecting student privacy
  2. analyzing the collected data to make more educated guesses about what’s leading to better learning, and
  3. iterating based on these data to improve the platform, tool, or procedure for better learning outcomes.

Some but not all platforms will partner with researchers to better learn what’s working best for students. These findings can then be shared with the community at large to help improve learner outcomes everywhere.
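
To make this loop concrete, here is a minimal, hypothetical sketch in Python of the collect-analyze-iterate cycle described above. The event fields, variant names, and success metric are illustrative assumptions, not features of any particular platform.

    import random
    from collections import defaultdict

    # A minimal, hypothetical sketch of the collect-analyze-iterate loop.
    # Event fields, variant names, and the success metric are illustrative only.

    log = []  # in practice: an instrumented event store with anonymized identifiers

    def collect(student_id, variant, solved_problem):
        """Process 1: record one interaction as a student works on the platform."""
        log.append({"student": student_id, "variant": variant, "success": solved_problem})

    def analyze():
        """Process 2: estimate which variant is leading to better learning."""
        totals, successes = defaultdict(int), defaultdict(int)
        for event in log:
            totals[event["variant"]] += 1
            successes[event["variant"]] += event["success"]
        return {v: successes[v] / totals[v] for v in totals}

    def iterate(success_rates):
        """Process 3: promote the better-performing variant as the new default."""
        return max(success_rates, key=success_rates.get)

    # Simulated usage: assign students to variants, collect events, then iterate.
    for student in range(200):
        variant = random.choice(["hints_on", "hints_off"])
        solved = random.random() < (0.6 if variant == "hints_on" else 0.5)
        collect(student, variant, solved)

    rates = analyze()
    print(rates)           # per-variant success rates estimated from the log
    print(iterate(rates))  # the variant to roll out in the next iteration

In a real platform, the analysis step would of course involve proper statistical tests and privacy safeguards rather than raw success rates.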

Consider these questions:

Does your platform allow external researchers to run science of learning studies within your platform?

If the answer is yes, then your platform is instrumented and you should address how this instrumentation will scale and grow with the support of the Tools Competition.

Does your platform allow external researchers to mine data within your platform to better understand the science of learning?

If the answer is yes, then your platform is instrumented and you should address how this instrumentation will scale and grow with the support of the Tools Competition.

If the answer to either of the above questions is “no,” then we highly recommend that you partner with a researcher to help you think through how to begin to instrument your platform as part of the Tools Competition.

See more below for how to instrument your platform.

Instrumentation means building out a digital learning platform so that many external researchers can engage in research. To be more exact, the platform offers its data as an “instrument” for doing research. In this sense, instrumentation is central to learning engineering; it is the process by which a platform turns its data into a research tool.

One primary way to instrument is by building a way for external researchers to run A/B experiments. Several platforms have created systems that allow external researchers to run their research trials on digital platforms. In other words, the platforms have “opened up” their platforms to external researchers. These platforms facilitate large-scale A/B trials and offer open-source trial tools, as well as tools that teachers themselves can use to conduct their own experiments.

When it comes to building A/B instrumentation within a platform, the process usually begins with identifying key data flows and ways in which there could be splits within the system. Platforms will also have to address issues of consent, privacy, and sample size. For instance, the average classroom does not provide a large enough sample size, and so platforms will need to think about ways to coordinate across classrooms. A number of platforms have also found success building “templates” to make it easier for researchers to run studies at scale.
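
As a rough illustration of the assignment step described above, here is a hypothetical Python sketch of deterministic, student-level randomization with a consent gate. The experiment name, variant labels, and content identifiers are assumptions made for illustration, not part of any specific platform’s API.

    import hashlib

    def assign_variant(student_id: str, experiment: str, variants=("control", "treatment")) -> str:
        """Deterministically assign a student to a variant.

        Hashing (experiment, student_id) yields a stable split that stays consistent
        across sessions and classrooms, so samples can be pooled beyond a single class.
        """
        digest = hashlib.sha256(f"{experiment}:{student_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    def serve_content(student_id: str, has_consent: bool) -> str:
        """Only students with recorded consent enter the trial; others see the default."""
        if not has_consent:
            return "default_problem_set"
        variant = assign_variant(student_id, experiment="interleaved_practice_v1")
        return "interleaved_problem_set" if variant == "treatment" else "blocked_problem_set"

    # The same student always lands in the same arm, in any classroom.
    print(serve_content("student-00042", has_consent=True))

Consent workflows, privacy review, logging, and sample-size planning would sit around this core in a real deployment.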

One example of this approach is the ETRIALS testbed created by the ASSISTments team. As co-founder Neil Heffernan has argued, ETRIALS “allows researchers to examine basic learning interventions by embedding RCTs within students’ classwork and homework assignments. The shared infrastructure combines student-level randomization of content with detailed log files of student- and class-level features to help researchers estimate treatment effects and understand the contexts within which interventions work.”

To date, the ETRIALS tool has been used by almost two dozen researchers to conduct more than 100 studies, and these studies have yielded useful insights into student learning. For example, Neil Heffernan has shown that crowdsourcing “hints” from teachers has a statistically significant positive effect on student outcomes. The platform is currently expanding to increase the number of researchers by a factor of ten over the next three years.

Other examples of platforms that have “opened up” in this way include Canvas, Zearn, and Carnegie Learning.

Carnegie Learning created the powerful Upgrade tool to help edtech platforms conduct A/B tests. This project is designed to be a “fully open source platform and aims to provide a common resource for learning scientists and educational software companies.” Using Carnegie Learning’s Upgrade, the Playpower Labs team found that adding “gamification” actually reduced learner engagement by 15 percent.

A secondary way that learning platforms can contribute to the field of learning engineering is by producing large shareable datasets. Sharing large datasets that have been anonymized (stripped of all personally identifiable markers to protect student privacy) is a major catalyst for progress in the field as a whole.
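
As a sketch of what basic de-identification might look like before sharing, consider the hypothetical Python snippet below. The field names are invented for illustration, and a real data release involves much more than this, including review of quasi-identifiers, consent, and applicable privacy law.

    import hashlib
    import secrets

    SALT = secrets.token_hex(16)  # kept private; never released with the data
    DIRECT_IDENTIFIERS = {"name", "email", "date_of_birth"}

    def anonymize_record(record: dict) -> dict:
        """Drop direct identifiers and replace the student ID with a salted pseudonym."""
        cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        pseudonym = hashlib.sha256((SALT + record["student_id"]).encode()).hexdigest()[:16]
        cleaned["student_id"] = pseudonym
        return cleaned

    raw = {"student_id": "S123", "name": "Ada", "email": "ada@example.org",
           "grade": 7, "item": "fractions_04", "correct": True}
    print(anonymize_record(raw))  # identifiers removed; ID replaced with a stable pseudonym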

In the field of machine learning for image recognition, there is a ubiquitously used open-source dataset of more than 14 million labeled images called “ImageNet”. The creation and open-source release of this dataset has allowed researchers to build better and better image recognition algorithms, catapulting the field to a new, higher standard. We need similar datasets in the field of education.

An example of this approach is the development of a dataset aimed at improving assisted feedback on writing. Called the “Feedback Prize,” this effort will build on the Automated Student Assessment Prize (ASAP) that occurred in 2012 and support educators in their efforts to give feedback to students on their writing.

To date, the project has developed a dataset of nearly 400,000 essays from more than a half-dozen different platforms. The data are currently being annotated for discourse features (e.g., evidence, claims) and will be released as part of a data science competition. More on the project here.

Another example of an organization that has created a shared dataset is CommonLit, which uses algorithms to determine the readability of texts. CommonLit has shared its corpus of 3,000 level-assessed reading passages for grades 6-12. This will allow researchers to create open-source readability formulas and applications.
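
For a sense of what an open readability formula can look like, here is a minimal Python sketch of the classic Flesch-Kincaid grade-level formula with a rough syllable heuristic. It is a generic, illustrative example, not CommonLit’s own model.

    import re

    def count_syllables(word: str) -> int:
        # Rough heuristic: count groups of vowels; good enough for a sketch.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text: str) -> float:
        """Estimate the U.S. grade level of a passage."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    passage = "The quick brown fox jumps over the lazy dog. It was not amused."
    print(round(flesch_kincaid_grade(passage), 1))

A level-assessed corpus like CommonLit’s makes it possible to validate or refine formulas of this kind against human judgments.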

For the Learning Engineering Tools Competition 2022, a dataset alone would not make a highly competitive proposal. Teams with a compelling dataset are encouraged to partner with a researcher or developer that will design a tool or an algorithm based on the data.


SPONSORED BY


“Bill & Melinda Gates Foundation” is a registered trademark of the Bill & Melinda Gates Foundation in the United States and is used with permission.