Security fears around AI employees are no longer science fiction: according to Anthropic's chief information security officer, they are now on the 12-month horizon. In a recent interview, CISO Brian Carpenter said that AI-powered agents capable of remembering past interactions and managing internal passwords may be deployed inside companies as soon as next year.
“We’re not talking about assistants anymore,” Carpenter warned. “We’re talking about persistent digital teammates with access to sensitive systems, capable of reasoning over time.”
As seen in Millionaire MNL, these AI “employees” won’t just schedule meetings or generate drafts. They’ll remember your previous preferences, hold context over weeks, and potentially access proprietary data — raising urgent questions about security, compliance, and governance.
From chatbot to colleague
Anthropic, the company behind Claude AI, is among the leaders racing to push boundaries on agentic AI — systems that don’t just respond, but act semi-independently in complex environments. Carpenter says the next generation of AI tools will have memory, personality shaping, and tiered access control.
What makes this evolution different is not just functionality — it’s autonomy. These tools could be configured to handle tasks like onboarding new hires, executing vendor payments, or managing email traffic with minimal human oversight.
That’s why security for AI employees is moving to the top of enterprise IT agendas.
Who holds the keys — and the history?
With memory comes risk. Unlike current AI models that operate statelessly, AI agents with memory will accumulate data — potentially remembering past conversations, workflows, or even login credentials.
According to Carpenter, this is where the danger lies. “If an AI agent goes rogue or is compromised, the blast radius is much bigger,” he explained. “You’re not just losing one email. You’re exposing entire decision histories, internal tools, or financial accounts.”
As mentioned by Millionaire MNL, several startups are already testing lightweight memory features in pilot enterprise deployments. The real challenge, Carpenter said, will be in creating auditability, revocation systems, and ethical boundaries that companies can trust.
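The auditability and revocation controls Carpenter describes could be sketched, in highly simplified form, as an append-only audit trail paired with revocable, scoped grants for each agent. The class and method names below are illustrative assumptions, not taken from any real framework or from Anthropic's products:

```python
import time
from dataclasses import dataclass

@dataclass
class AuditEntry:
    timestamp: float
    agent_id: str
    action: str
    resource: str

class AgentCredentialStore:
    """Hypothetical sketch: scoped, revocable grants plus an append-only audit log."""

    def __init__(self):
        self._grants = {}     # agent_id -> set of permitted resources
        self._audit_log = []  # append-only list of AuditEntry records

    def grant(self, agent_id, resource):
        self._grants.setdefault(agent_id, set()).add(resource)

    def revoke_all(self, agent_id):
        # Kill switch: strip every grant from a compromised or rogue agent.
        self._grants.pop(agent_id, None)

    def access(self, agent_id, resource):
        # Every attempt, allowed or denied, is recorded for later audit.
        allowed = resource in self._grants.get(agent_id, set())
        self._audit_log.append(
            AuditEntry(time.time(), agent_id,
                       "access" if allowed else "denied", resource))
        return allowed

    def history(self, agent_id):
        return [e for e in self._audit_log if e.agent_id == agent_id]
```

The key design point, in line with Carpenter's concern about "blast radius," is that revocation is a single operation while the decision history survives it, so investigators can still reconstruct what the agent touched.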
Security isn’t optional — it’s architectural
Carpenter emphasized that companies must think differently about AI employees. This isn’t another SaaS tool or dashboard plugin. These agents will sit inside org charts, collaborate across teams, and shape decisions.
He advocates for building AI systems with security-first design — including real-time monitoring, behavior-based alerts, and compartmentalized access.
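One of those ideas, behavior-based alerting, can be illustrated with a minimal sliding-window monitor that flags an agent acting faster than its expected baseline. This is a toy sketch under assumed thresholds, not a description of any vendor's actual monitoring:

```python
import time
from collections import deque

class BehaviorMonitor:
    """Toy sketch: flag an agent whose action rate exceeds a per-window limit."""

    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self._events = {}  # agent_id -> deque of action timestamps

    def record(self, agent_id, now=None):
        """Record one action; return True if the agent should trigger an alert."""
        now = time.time() if now is None else now
        q = self._events.setdefault(agent_id, deque())
        q.append(now)
        # Drop actions that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_actions
```

A real deployment would alert on richer signals (unusual resources, odd hours, new data volumes), but the principle is the same: the agent's behavior, not just its credentials, is what gets watched.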
“We can’t bolt on security after these agents are deployed,” he said. “We have to build it in from the beginning.”
Carpenter’s advice to executives: prepare for a shift in internal structures. Assign responsibility for AI oversight, update your risk models, and treat these systems as you would a new team of contractors — with permissions, onboarding protocols, and exit procedures.
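The contractor analogy above can be made concrete as an explicit lifecycle: an agent is onboarded with a fixed set of permissions and loses all of them at exit. The states and names here are hypothetical, chosen only to illustrate the idea:

```python
from enum import Enum

class AgentState(Enum):
    ONBOARDING = "onboarding"
    ACTIVE = "active"
    OFFBOARDED = "offboarded"

class ManagedAgent:
    """Hypothetical sketch: treat an AI agent like a contractor with a lifecycle."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.state = AgentState.ONBOARDING
        self.permissions = set()

    def onboard(self, permissions):
        # Onboarding protocol: permissions are assigned once, up front.
        self.permissions = set(permissions)
        self.state = AgentState.ACTIVE

    def can(self, permission):
        # No access before onboarding completes or after exit.
        return self.state is AgentState.ACTIVE and permission in self.permissions

    def offboard(self):
        # Exit procedure: revoke everything and mark the agent inactive.
        self.permissions.clear()
        self.state = AgentState.OFFBOARDED
```

The point of the explicit `offboard` step mirrors Carpenter's advice: an AI teammate needs a defined exit, just as a departing contractor loses badge and account access on their last day.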