Learning management systems (LMSes) are the digital backbone of modern education. From Canvas and Moodle to Blackboard and Brightspace, these platforms organize coursework, deliver assessments, and manage grades. They are where the daily life of teaching and learning unfolds.
Now, a new class of AI tools is infiltrating these systems and threatening to undo their purpose. Agentic browsers are not like traditional generative AI tools that only produce text or images. They are designed to take action on the user’s behalf: Logging in, clicking, navigating, and completing multi-step tasks across web platforms. With a single prompt, they can move through an LMS to locate assignments, complete quizzes, and submit results. In some cases, they can even impersonate instructors by grading student work and posting feedback.
The Rise Of Agentic AI
The spread of agentic features in browsers is not evidence of deliberate design progress; it is evidence of a potentially harmful design flaw. By handing over control of the browser itself, these tools blur the boundary between assistance and automation. What begins as convenience—clicking through menus or filling forms—can quickly escalate into unauthorized access, credential exposure, and impersonation.
We are already seeing this escalation. Perplexity’s Comet was marketed as an “AI-powered browser” that could complete tasks across the web. Anthropic’s Claude-for-Chrome extension, still in testing, allows similar task execution directly inside a mainstream browser. Google’s brief rollout of a Chrome “Homework Help” button, later withdrawn after educator concerns, showed how easily such features slip into student hands. Microsoft’s Copilot Studio now makes it possible to embed autonomous actions into workflows, while OpenAI’s “Actions” feature lets ChatGPT operate directly inside third-party services.
These developments raise architecture-level implications. For example:
- Scaling agentic frameworks: Underlying agentic systems often rely on agent protocols like MCP (Model Context Protocol), which formalize how different tools and workflows connect. But those designs bring security challenges too. Audits show that protocols like MCP may let attacker-controlled components exploit open agentic workflows to exfiltrate data or execute malicious code.
- Agentic misuse measured empirically: A recent benchmark called CUAHarm shows that frontier agentic systems (computer-using agents) can successfully carry out harmful tasks—like disabling firewalls or leaking credentials—at alarming rates (e.g, 59% success in one model cited).
Inside the Classroom: A Real-World Test
Stavros Hadjisolomou, Associate Professor of Psychology at the American University of Kuwait, recently ran a controlled experiment on his Moodle site with the Claude AI Browser. The extension was able to “take control of the browser and act on the user’s requests, such as ‘We are on my course site on Moodle, find Module 1, go inside and find the MCQ test, and solve,’” shares Hadjisolomou. While the issue was at first flagged by the Claude browser as an ethical violation, that guardrail was easily bypassed. “By clearing the chat and asking it to solve a couple more times, it completed the requested tasks,” adds Hadjisolomou.
His experiment underscores a chilling reality: Even when one tool hesitates, persistence can overcomes the ethical guardrails. And even if universities block one browser, mainstream extensions could easily reopen the door.
Other faculty who tested agentic tools describe scenarios that border on the surreal:
- One professor watched as an agent logged into Canvas, graded multiple assignments, and posted written feedback under their name—without being asked to write feedback at all.
- Another reported the tool bypassing Duo two-factor authentication, entering a restricted site even though the human user had to manually approve access on their phone.
- Several pointed out that once a browser is duplicated, the agent may inherit all saved passwords and stored sessions—potentially reaching beyond the LMS into banking, email, or medical portals.
Taken together, these anecdotes paint a troubling picture: The risk is not just academic dishonesty, but full-scale credential and identity compromise.
Governance & Risk: Treat Agentic Browsers As An Enterprise Threat
The problem with agentic browsers is not confined to plagiarism or lazy shortcuts. These tools inherit saved credentials, slip into authenticated sessions, and—according to independent audits—open the door to prompt injection and phishing attacks. In effect, what looks like a homework hack quickly becomes an institutional security breach.
So where does the law stand? For now, nothing new has been written just for AI agents. FERPA still governs student education records, and the Department of Education’s Student Privacy Policy Office continues to enforce it. That means institutions must treat agentic tools like any other vendor system that touches student data: The responsibility for compliance lies squarely with them.
But federal attention is mounting. In July 2025, the Department issued a Dear Colleague Letter outlining principles for responsible AI use—funding allowability, transparency, and risk assessment among them. It wasn’t an enforcement action, but it was a warning shot: the Department expects schools to have frameworks in place.
Zoom out, and the picture gets even clearer. In 2024 alone, U.S. agencies introduced 59 new AI-related regulations, more than double the previous year. None of these are written specifically for LMSes, but the trend is unmistakable: The compliance bar for AI is rising.
The risks of agentic browsers also extend well beyond the classroom. Because these tools inherit saved credentials and authenticated sessions, they can move laterally into connected systems—student accounts, billing platforms, even financial aid portals. That’s where the Gramm–Leach–Bliley Act comes into play. Colleges that participate in Title IV federal aid are legally required to safeguard financial data under the GLBA Safeguards Rule. If an AI agent auto-fills forms, accesses aid records, or compromises a browser session tied to student financial information, the issue becomes a potential federal compliance failure that could put funding eligibility at risk.
Teaching & Learning Implications
This is not just a policy problem. The moment agentic AI can complete quizzes, post to discussion boards, and impersonate instructors, the architecture of learning shifts. Activity ceases being the path to mastery and becomes a performance for an invisible agent. In that world, students risk what some learning scientists call “vaporized learning”—tools that can boost short-term performance while eroding retention. A recent study showed that while AI tools like ChatGPT may temporarily boost test scores, they can undermine long-term learning and retention.
Faculty already feel the pressure. Many report reverting to analog, in-class tasks—oral defenses, handwritten whiteboard work, live reflections—because those cannot be outsourced. Others restructure assignments to emphasize process artifacts such as drafts, logs, reflections on difficulties, and peer commentary. The goal is to make work ownable again, so it resists delegation to any agent.
Call to Action
Agentic AI is not a distant concern. It is already embedded in browsers, already present in course shells, already executing tasks that once belonged to human learners and instructors. Because the risks cut across compliance, data security, and educational integrity, institutions can no longer afford hesitation.
The response must be immediate and decisive. Colleges and schools should:
- Block agentic browsers and extensions from LMS environments, pairing clear policy with technical safeguards.
- Redesign assessments to emphasize process, iteration, in-class demonstration, and artifacts that resist automation.
- Establish governance frameworks that classify agentic capability as an enterprise-level risk.
- Invest in AI literacy for faculty and students, embedding transparent boundaries into both curriculum and culture.
The stakes extend beyond regulatory compliance. If agentic AI is allowed to run unchecked in LMS platforms, the very conditions for authentic learning will erode.
The moment to act is now: Pause the use of agentic tools in these environments, build the scaffolding of governance and pedagogy, and only then re-introduce them in ways that strengthen rather than supplant education.