With the rise of remote learning, online proctoring – used to ensure academic integrity during virtual exams – has become widely adopted by schools and universities across the U.S. These tools use methods like identity verification, video and audio monitoring, eye-tracking, and even AI-based behavioral analysis. As this technology proliferates, concerns about how such software collects, processes, and stores sensitive personal data have increased. One of the most controversial aspects of online proctoring is the use of AI to profile students, analyzing their behavior, gaze, and movements to flag suspicious activities. This automated decision-making can not only invade privacy but also lead to unfair treatment, raising concerns about its legality under both European and U.S. privacy laws.

GDPR and Privacy Considerations with Online Proctoring

For U.S.-based proctoring services offering their tools to educational institutions in the European Economic Area (EEA), the General Data Protection Regulation (GDPR) is a crucial legal framework. Online proctoring may involve profiling: according to Art. 4 para. 4 GDPR, profiling is any automated processing of personal data used to evaluate or predict personal aspects such as behavior, reliability, or performance. When proctoring software uses profiling to assess a student’s likelihood of cheating, it falls under this definition.

According to Art. 22 GDPR, data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. If students are categorized by an algorithm according to their probability of cheating, this constitutes automated decision-making. In the context of online proctoring, this means students should not be judged solely by an AI system unless they’ve given explicit consent (Art. 22 para. 2 lit. c GDPR). This is critical because profiling that affects a student’s academic outcome can produce significant consequences, requiring robust safeguards to ensure fairness.

Moreover, if the proctoring software processes special categories of data, such as biometric data used to uniquely identify a student (e.g., facial recognition or fingerprint scanning), explicit consent is required under Art. 9 para. 2 lit. a GDPR. Institutions must ensure students fully understand how their data will be used, processed, and stored, and they must provide options for opting out of such profiling.
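To make these GDPR duties concrete, here is a minimal sketch, in Python, of how a proctoring backend might gate AI profiling on explicit consent and keep a human reviewer in the loop, so that no adverse decision rests solely on automated processing. Every name in it (Consent, IntegrityFlag, the function signatures) is hypothetical and invented for this example, not taken from any real proctoring product.

```python
from dataclasses import dataclass
from enum import Enum


class FlagStatus(Enum):
    PENDING_HUMAN_REVIEW = "pending_human_review"
    CONFIRMED = "confirmed"
    DISMISSED = "dismissed"


@dataclass
class Consent:
    # Explicit, separately recorded consents; both default to "not given".
    biometric_processing: bool = False  # cf. Art. 9 para. 2 lit. a GDPR
    automated_profiling: bool = False   # cf. Art. 22 para. 2 lit. c GDPR


@dataclass
class IntegrityFlag:
    """An AI-generated suspicion; never a final decision by itself."""
    reason: str
    confidence: float
    status: FlagStatus = FlagStatus.PENDING_HUMAN_REVIEW


def raise_flag(consent: Consent, reason: str, confidence: float) -> IntegrityFlag | None:
    """Emit a flag from the AI detector only if the student consented to profiling."""
    if not consent.automated_profiling:
        # Without explicit consent, the AI detector is not applied at all.
        return None
    # Even with consent, a flag merely queues the case for a human reviewer;
    # the algorithm alone never changes a grade or exam result.
    return IntegrityFlag(reason=reason, confidence=confidence)


def finalize(flag: IntegrityFlag, reviewer_decision: FlagStatus) -> IntegrityFlag:
    """Only a human reviewer may confirm or dismiss a flag."""
    if reviewer_decision not in (FlagStatus.CONFIRMED, FlagStatus.DISMISSED):
        raise ValueError("reviewer must confirm or dismiss")
    flag.status = reviewer_decision
    return flag
```

The point of the design is that the AI can only ever queue a case; the confirm-or-dismiss step belongs to a person, which is what keeps the outcome from being based solely on automated processing.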

U.S. Federal Privacy Laws and Online Proctoring

In the U.S., online proctoring must comply with two major federal laws:

  • Family Educational Rights and Privacy Act (FERPA): FERPA ensures the confidentiality of student educational records. Under FERPA, educational institutions must obtain explicit written consent from students (or parents of minors) before sharing personally identifiable information, including data collected during proctored exams. Proctoring tools that record video or audio, or capture other sensitive data, must use that data strictly for educational purposes.
  • Children’s Online Privacy Protection Act (COPPA): This law applies to children under 13 years of age and requires online services, including proctoring tools, to obtain parental consent before collecting personal information. This is particularly relevant in K-12 environments (primary and secondary education) where proctoring may be used.

Both FERPA and COPPA emphasize the need for informed consent, ensuring that student data is handled with care and used only for its intended purposes. Institutions using proctoring tools must strictly control who has access to this data and ensure it’s not misused or shared for non-educational purposes.
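As a rough illustration of that access-control point, the following Python sketch admits a read of exam-session data only for an allow-listed role-and-purpose pair and writes an audit entry for every attempt. The roles, purposes, and helper names are assumptions made up for the example, not a taxonomy FERPA itself defines.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("proctoring.audit")

# Hypothetical allow-list: which roles may access exam records, and why.
ALLOWED_ACCESS = {
    ("instructor", "grade_review"),
    ("registrar", "records_request"),
    ("proctor", "incident_review"),
}


def read_exam_record(record_id: str, role: str, purpose: str) -> dict:
    """Return an exam record only for an allow-listed (role, purpose) pair,
    logging every attempt, allowed or not."""
    allowed = (role, purpose) in ALLOWED_ACCESS
    audit_log.info("record=%s role=%s purpose=%s allowed=%s at=%s",
                   record_id, role, purpose, allowed,
                   datetime.now(timezone.utc).isoformat())
    if not allowed:
        # E.g. a vendor asking for exam footage "for marketing" is refused here.
        raise PermissionError(f"{role!r} may not access records for {purpose!r}")
    return fetch_record(record_id)


def fetch_record(record_id: str) -> dict:
    # Stub standing in for the real datastore lookup.
    return {"id": record_id, "video": "<blob>", "audio": "<blob>"}
```

So a registrar fulfilling a records request gets through, while a request tagged with a non-educational purpose fails closed and still leaves an audit trail.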

State-Specific Laws Affecting Online Proctoring

In addition to federal laws, several states have implemented their own laws that directly impact the use of online proctoring tools. While these laws vary from state to state, they share common themes of consent, transparency, and restrictions on data use. Below are some key examples:

  • California: California leads the way in regulating online proctoring through its robust privacy laws. Senate Bill (“SB”) 1172, the Student Test Taker Privacy Protection Act, passed in 2022, specifically addresses student privacy in online exams. It prohibits proctoring companies from collecting, retaining, or sharing personal data beyond what is necessary to conduct the exam. Violations can result in significant fines, including $1,000 per incident. The California Privacy Rights Act (CPRA) requires businesses to disclose the logic behind automated decision-making processes and the likely outcomes for consumers, ensuring transparency in AI-driven proctoring tools. Additionally, California’s Student Online Personal Information Protection Act (SOPIPA) bans the use of student data for non-educational purposes, such as targeted advertising, and Assembly Bill No. 1584 allows schools to contract with FERPA-compliant third parties to manage student data.
  • Virginia: The Virginia Consumer Data Protection Act (VCDPA) mandates explicit consent for the processing of sensitive personal data, including biometric information. Importantly, it also gives consumers the right to opt out of automated decision-making, which could affect the use of AI in online proctoring.
  • Colorado: The Colorado Privacy Act, effective since July 2023, requires proctoring services to obtain consent before collecting biometric data. It also enforces rigorous standards for data protection, including periodic assessments of whether the storage of personal data is truly necessary for its intended purpose.
  • Connecticut: Similar to Colorado and Virginia, the Connecticut Data Privacy Act (CTDPA), effective from July 2023, imposes strict consent requirements for biometric data processing. It also mandates that institutions conduct Data Protection Impact Assessments (DPIAs) for high-risk processes like AI profiling.
  • Montana and Texas: Montana (Montana Consumer Data Privacy Act, MTCDPA) and Texas (Texas Data Privacy and Security Act, TDPSA) have enacted new privacy laws that will impose similar restrictions on the use of sensitive personal data in online proctoring, emphasizing consent, transparency, and accountability in data handling.
  • Florida: The Florida Digital Bill of Rights (FDBR) requires consent for processing sensitive data such as racial or ethnic origin and biometric data used for identification. While the FDBR lacks specific provisions on automated decision-making as such, it does allow individuals to opt out of the processing of personal data for profiling in furtherance of decisions that produce legal or similarly significant effects (Section 501.705(2)(e)).

A clear trend is emerging across U.S. state privacy laws: informed consent is required for processing sensitive personal data, particularly biometric data, and transparency and Data Protection Impact Assessments are emphasized for high-risk processing. Several of these laws (the FDBR, CPRA, TDPSA, and MTCDPA) also grant individuals the right to opt out of automated decision-making, which is crucial when AI is used to profile students during online exams.

Best Practices and Recommendations

To comply with U.S. legal requirements and protect student privacy, organizations that use or provide proctoring services should consider the following best practices:

  1. Obtain Informed Consent: Explicit consent must be collected before processing any personal or biometric data. Consent should be obtained through clear and understandable notices, detailing how the data will be used, stored, and shared. Consent should also be easy to withdraw at any point (the sketch after this list shows withdrawal feeding directly into deletion).
  2. Limit Data Collection: Proctoring software should only capture data essential for monitoring the exam, such as recording the immediate work area and avoiding unnecessary data collection like room scans or unrelated audio/video surveillance.
  3. Implement Strong Data Security Protocols: Ensure that state-of-the-art security measures are in place as well as secure data storage practices to protect sensitive personal and biometric information from unauthorized access.
  4. Be Transparent About AI Usage: Clearly explain to students how AI is used in the proctoring process, including the logic behind any automated decisions and the potential outcomes. Where applicable, provide students with the option to opt out of AI-driven decision-making.
  5. Comply with Data Retention and Deletion Policies: Data should only be retained for as long as necessary to complete the exam process. Clear data deletion protocols should be established to ensure personal data is promptly removed once it is no longer needed, as sketched after this list.
  6. Conduct Data Protection Impact Assessments (DPIAs): For high-risk data processing activities, such as the use of AI for proctoring, DPIAs should be conducted to identify and mitigate potential privacy risks. These assessments can help ensure compliance with privacy regulations and reduce legal risks.
  7. Stay Updated on Evolving Legislation: As data privacy laws continue to evolve, proctoring providers must stay informed of changes and adjust their practices accordingly. Regular compliance reviews and legal updates are essential for maintaining adherence to state and federal laws.
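To tie points 1 and 5 together, here is a minimal retention-job sketch in Python that deletes a recording when its retention window lapses or when the student withdraws consent. The 30-day window and the secure_delete helper are assumptions for the example; real values belong in institutional policy and the applicable statute.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed policy value, not a statutory number


@dataclass
class StoredRecording:
    student_id: str
    captured_at: datetime  # expected to be timezone-aware
    consent_withdrawn: bool = False


def secure_delete(rec: StoredRecording) -> None:
    # Stub: a real implementation would remove the media blob, database
    # rows, search indexes, and any copies held in backups.
    print(f"deleted recording of {rec.student_id} from {rec.captured_at:%Y-%m-%d}")


def purge(recordings: list[StoredRecording],
          now: datetime | None = None) -> list[StoredRecording]:
    """Keep only recordings inside the retention window whose consent
    has not been withdrawn; everything else is deleted."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for rec in recordings:
        expired = now - rec.captured_at > RETENTION
        if expired or rec.consent_withdrawn:
            secure_delete(rec)
        else:
            kept.append(rec)
    return kept
```

Running a job like this on a schedule, and extending it to backups, is what turns a written retention policy into something an auditor can verify.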

The rapid growth of online proctoring in the U.S. has raised critical questions about the balance between maintaining academic integrity and respecting student privacy. From the GDPR’s strict regulations on profiling to U.S. federal and state laws, the legal landscape is becoming increasingly complex. Institutions that use online proctoring services must navigate these laws carefully, ensuring that students’ personal data is protected, their consent is obtained, and AI systems do not unfairly impact their academic outcomes.