JavaScript security advice gets weird fast. One minute you are being told to never use eval, which is correct, and the next minute you are knee-deep in framework-specific caveats, browser APIs, package signatures, and half a dozen ways to accidentally turn user input into executable code. The problem is not that the guidance is wrong. The problem is that a lot of it is too abstract to survive contact with a real codebase.
The version that holds up in practice is simpler. If your JavaScript touches untrusted input, writes to the DOM, loads third-party code, or drags in a large dependency tree, you need controls that make unsafe paths harder to take by accident. Secure coding in JavaScript is less about memorizing one forbidden API and more about reducing the number of places where a bad assumption can become code execution.

The first place to get serious is DOM XSS. Google’s Trusted Types guidance calls DOM-based XSS one of the most common web security vulnerabilities because data from user-controlled sources can end up in dangerous sinks such as innerHTML, document.write, eval, setTimeout with string input, or new Function. That matters because modern frontend apps constantly move data between URL parameters, state objects, API responses, and client-side rendering code. If one developer treats a value as text and another later treats it as HTML, you have a problem that usually does not show up until someone pokes at it on purpose. Trusted Types are useful because they force teams to process data before it reaches dangerous sinks, and browsers will reject plain strings when those protections are enabled.
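To make that concrete, here is a minimal sketch of what Trusted Types enforcement looks like. The policy name `app-policy` and the `escapeHtml` helper are illustrative, not from any of the cited guidance; real apps usually delegate the `createHTML` step to a dedicated sanitizer.

```javascript
// Illustrative escaping helper: encode the characters that let
// text break out into markup. (A real policy would more likely
// call a sanitizer library here.)
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Feature-detect: trustedTypes only exists in supporting browsers,
// so this is a no-op elsewhere (including Node).
const tt = globalThis.trustedTypes;
const policy = tt
  ? tt.createPolicy('app-policy', { createHTML: (input) => escapeHtml(input) })
  : null;

// With the CSP directive `require-trusted-types-for 'script'` in
// place, assigning a plain string to innerHTML throws; only a
// policy-produced value gets through the sink:
//   el.innerHTML = policy.createHTML(userInput);
```

The point of the policy indirection is that every path into a dangerous sink now goes through one reviewable function instead of being scattered across the codebase.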
That does not mean you should just flip on Trusted Types and declare victory. The better habit is to reduce use of risky sinks in the first place. If you need to display text, use textContent. If you need to create elements, create elements. If you genuinely need to render HTML, sanitize it with a library that is built for that job. The web.dev Trusted Types article explicitly uses DOMPurify as an example. OWASP makes the same point from a different angle in its Cross Site Scripting Prevention Cheat Sheet: contextual output encoding and sanitization have to match the place where data lands. There is no universal “XSS fix” that works everywhere.
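The "create elements, set text" habit looks like this in practice. This is a hypothetical helper, not code from any of the cited sources; it takes the document as a parameter purely so it can be exercised outside a browser.

```javascript
// Hypothetical helper: render untrusted text without ever touching
// innerHTML. Passing `doc` in is just for testability; in the
// browser you would hand it the global `document`.
function renderComment(doc, container, author, body) {
  const item = doc.createElement('li');
  const name = doc.createElement('strong');
  name.textContent = author; // text sink: markup in `author` stays inert
  const text = doc.createElement('p');
  text.textContent = body;   // same for the comment body
  item.appendChild(name);
  item.appendChild(text);
  container.appendChild(item);
  return item;
}

// If you genuinely must render HTML, sanitize it first instead:
//   el.innerHTML = DOMPurify.sanitize(untrustedHtml);
```

Note that the safety here comes from the sink choice, not from any escaping: `textContent` simply cannot produce markup, no matter what the input contains.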
The second place teams get burned is third-party trust. Pulling a script from a CDN can be fine, but only if you behave like you are outsourcing execution inside your own origin, because that is effectively what you are doing. MDN’s Subresource Integrity documentation is very clear on the risk: if an attacker can modify a third-party hosted script, they can inject arbitrary content into every site that loads it. SRI helps by attaching a cryptographic hash to the script or stylesheet so the browser refuses to run a tampered file. In plain terms, it turns “I hope the CDN still serves what I expected” into “the browser will verify it.” That is a much better security boundary.

The same mindset applies to the package ecosystem. npm’s own docs say npm audit submits a description of your dependency tree to the registry to look for known vulnerabilities, and npm audit signatures can verify registry signatures for downloaded packages. Neither command is magic. They do not tell you whether a package is well maintained, whether a maintainer account has been socially engineered, or whether an unnecessary transitive dependency should exist at all. But they are still useful because they make supply-chain visibility normal instead of optional. A healthy JavaScript project should have a lockfile, should review dependency changes in pull requests, should pin major upgrades deliberately, and should have someone on the team who can answer the question, “Why do we need this package?” without guessing.
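One way to make that visibility routine is to gate CI on the machine-readable output of `npm audit --json`. The sketch below assumes the report exposes severity counts under `metadata.vulnerabilities`, which matches recent npm versions, but the JSON shape has changed across releases, so verify the field names against your npm version before relying on this.

```javascript
// Sketch of a CI gate over `npm audit --json` output. The
// metadata.vulnerabilities field names are an assumption to
// confirm against your local npm version.
const SEVERITIES = ['info', 'low', 'moderate', 'high', 'critical'];

function auditGate(report, threshold = 'high') {
  const counts = (report.metadata && report.metadata.vulnerabilities) || {};
  const floor = SEVERITIES.indexOf(threshold);
  const failing = SEVERITIES
    .slice(floor)
    .reduce((sum, level) => sum + (counts[level] || 0), 0);
  return { ok: failing === 0, failing };
}

// Assumed CI wiring:
//   npm audit --json > audit.json
//   then load audit.json and fail the build when auditGate(...).ok is false.
```

The value of a gate like this is not the number itself; it is that a new high-severity advisory becomes a visible, blocking event instead of something nobody notices until later.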
The old warnings still have their place, too. MDN does not hedge on eval. It calls executing JavaScript from a string an enormous security risk because it becomes far too easy for a bad actor to run arbitrary code. The same logic applies to javascript: URLs, which MDN also discourages because they can trigger code execution and behave like fake navigation targets. In most codebases, if you still need either pattern, that is not a sign of cleverness. It is usually a sign that an earlier design decision should be revisited.
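In practice, the common "legitimate" uses of eval each have a direct, safer replacement. A quick sketch of the usual three (the `handlers` object and `dispatch` function are illustrative names):

```javascript
// Typical string-to-code patterns and their direct replacements.

// 1. Parsing data: never eval a JSON string.
const payload = '{"user": "mallory", "admin": false}';
// BAD:  const data = eval('(' + payload + ')');
const data = JSON.parse(payload); // parses data, executes nothing

// 2. Delayed work: pass setTimeout a function, not a string.
// BAD:  setTimeout("refresh()", 1000);
// GOOD: setTimeout(() => refresh(), 1000);

// 3. Dynamic behavior: look up a function in an explicit table
//    instead of assembling code. The allowed-key check matters.
const handlers = { save: () => 'saved', load: () => 'loaded' };
function dispatch(action) {
  // BAD:  eval(action + '()');
  if (!Object.hasOwn(handlers, action)) throw new Error('unknown action');
  return handlers[action]();
}
```

The third pattern is the one worth internalizing: a lookup table with an explicit membership check turns "attacker names the code to run" into "attacker picks from a list you wrote".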
One thing security teams get wrong is turning Content Security Policy into a comfort blanket. CSP matters, but OWASP is explicit that CSP is not a primary defense against XSS and should not be your only line of protection. That is the right way to think about it. CSP is there to reduce blast radius and catch classes of mistakes. It is not a substitute for safe rendering, code review, sanitization, or dependency discipline.
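A CSP header is just a semicolon-separated list of directives, which makes it easy to generate from one reviewable place instead of hand-editing a string. A sketch, with the caveat that the directive values below are illustrative defaults, not a drop-in policy for any particular app:

```javascript
// Sketch: build a Content-Security-Policy header value from a
// directive map. The directives shown are illustrative, not a
// recommended policy for your app.
function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, values]) => `${name} ${values.join(' ')}`)
    .join('; ');
}

const policy = buildCsp({
  'default-src': ["'self'"],
  'script-src': ["'self'"], // no inline script, no eval
  'object-src': ["'none'"],
  'base-uri': ["'self'"],
});

// With Node's built-in http module (no framework assumed):
//   res.setHeader('Content-Security-Policy', policy);
```

Treat the output exactly as the paragraph above suggests: a blast-radius limiter layered on top of safe rendering, not a replacement for it.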
The most useful JavaScript security habit is to make the safe path the default path. Use templates and frameworks that auto-escape by default. Treat every DOM write like a security decision. Keep third-party scripts on a short leash with integrity hashes. Audit and review dependency changes like code, because they are code. Remove string-to-code patterns unless you have a hard reason to keep them. If a teammate cannot explain why a sink is safe, the answer is that it probably is not.
Summary
Secure JavaScript is mostly about reducing the number of surprise execution paths in your app. Dangerous DOM sinks, loose third-party loading, and dependency sprawl are where the real trouble starts. Trusted Types, contextual output encoding, sanitization, SRI, and package-audit tooling all help, but only when they are part of a consistent habit of building for safety first. Boring code wins here. That is not glamorous, but it is how you keep small frontend mistakes from turning into incidents.
References