1. Why AI-Generated Code Needs Security Auditing

AI coding assistants have fundamentally changed how software gets built. With Claude Code, Cursor, and GitHub Copilot, developers ship features in hours that used to take days. But this velocity comes with a hidden cost: AI models do not have a security mindset by default. They optimize for correctness and completeness, not for the threat landscape your application lives in.

The numbers confirm this. A 2025 study by Backslash Security found that over 40% of AI-generated code samples contained at least one OWASP Top 10 vulnerability when tested against real-world attack patterns. A separate analysis by Snyk reported that developers using AI coding tools introduced hardcoded credentials at twice the rate of developers writing code manually, because AI models frequently complete boilerplate with example values that never get replaced.

The problem is structural, not accidental. AI models are trained on public repositories — which means they are also trained on millions of lines of vulnerable code. When an AI autocompletes a SQL query, builds a JWT handler, or generates a file upload endpoint, it draws from patterns that include insecure implementations. Without an explicit security review step, those patterns go straight into your production codebase.
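The interpolation-versus-parameterization distinction is concrete enough to show in a few lines. This is an illustrative sketch, not framework code: `buildQueryUnsafe` mirrors the pattern AI assistants often emit, while `buildQuerySafe` returns the text-plus-values shape accepted by drivers such as node-postgres.

```typescript
// Illustrative only: why string-interpolated SQL is dangerous.
function buildQueryUnsafe(email: string): string {
  // Attacker-controlled input lands directly in the SQL text.
  return `SELECT * FROM users WHERE email='${email}'`;
}

function buildQuerySafe(email: string): { text: string; values: string[] } {
  // The SQL text is constant; the driver binds values separately.
  return { text: "SELECT * FROM users WHERE email=$1", values: [email] };
}

const malicious = "' OR '1'='1";
console.log(buildQueryUnsafe(malicious)); // the injection becomes part of the query
console.log(buildQuerySafe(malicious));   // the query text never changes
```

With the unsafe version, the attacker's `OR '1'='1'` rewrites the query's logic; with the parameterized version, the same input is only ever treated as data.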

"AI writes code faster than any human. But speed without security is just technical debt with an expiration date you can't see."

Consider some real patterns that surface repeatedly in AI-generated code:

- SQL queries built with string interpolation instead of parameterized bindings
- JWT handlers that decode tokens without ever verifying the signature
- Example credentials and API keys left in place as hardcoded values
- Wildcard CORS configurations and debug mode enabled in production paths
- Passwords hashed with MD5 or SHA1 instead of a dedicated password-hashing algorithm

These are not edge cases. They are the default output of AI tools that were never explicitly instructed to prioritize security. The solution is not to stop using AI tools — it is to add a mandatory, automated security gate that catches these patterns before they merge.

2. The OWASP Top 10: Quick Reference

The OWASP Top 10 is the de facto standard for web application security. Published by the Open Web Application Security Project, it identifies the ten most critical security risks based on prevalence, exploitability, and business impact. Every professional security audit maps findings to these categories.

Here is how each category applies specifically to AI-generated codebases:

| OWASP Category | ID | AI Code Risk Level | Typical AI-Generated Pattern |
| --- | --- | --- | --- |
| Broken Access Control | A01 | Critical | Missing ownership checks on resource endpoints |
| Cryptographic Failures | A02 | Critical | MD5/SHA1 for passwords, hardcoded keys |
| Injection | A03 | Critical | SQL via string interpolation, command injection |
| Insecure Design | A04 | High | No rate limiting, missing threat modeling |
| Security Misconfiguration | A05 | High | Debug mode on, permissive CORS, verbose errors |
| Vulnerable Components | A06 | High | Pinned to outdated dependency versions |
| Auth & Session Failures | A07 | Critical | JWT not verified, weak session tokens |
| Software Integrity Failures | A08 | Medium | No checksum validation on downloaded artifacts |
| Logging & Monitoring Failures | A09 | Medium | No audit trail for sensitive operations |
| Server-Side Request Forgery | A10 | High | Unvalidated URL parameters in HTTP client calls |

Categories A01, A02, A03, and A07 represent the highest risk in AI-generated code because they involve patterns that AI models reproduce fluently — database queries, authentication flows, cryptographic operations — but that are frequently implemented without the defensive constraints that security engineers would apply by default.
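To make A07 concrete, here is a minimal sketch of the decode-versus-verify distinction using only Node's built-in `node:crypto`. It uses a simplified two-part token rather than a real three-segment JWT, and a real application would use a library such as `jsonwebtoken` — the point is only that decoding reads the payload blindly while verification recomputes the signature.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

function sign(payloadB64: string, secret: string): string {
  return createHmac("sha256", secret).update(payloadB64).digest("base64url");
}

// "decode": trusts the token blindly -- the pattern AI-generated handlers often use.
function decodeOnly(token: string): string {
  return Buffer.from(token.split(".")[0], "base64url").toString();
}

// "verify": recomputes the signature and compares in constant time.
function verify(token: string, secret: string): string | null {
  const [payloadB64, sig] = token.split(".");
  const expected = sign(payloadB64, secret);
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return Buffer.from(payloadB64, "base64url").toString();
}

const secret = "dev-secret"; // illustrative; never hardcode real secrets
const payloadB64 = Buffer.from('{"sub":"alice"}').toString("base64url");
const token = `${payloadB64}.${sign(payloadB64, secret)}`;
const forged = `${Buffer.from('{"sub":"admin"}').toString("base64url")}.deadbeef`;

console.log(decodeOnly(forged));      // the forgery is accepted without complaint
console.log(verify(forged, secret));  // null -- verification rejects it
```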

3. Manual vs Automated OWASP Auditing

There is a persistent belief that security audits must be performed manually by a certified professional to be meaningful. This is true for penetration testing and red team exercises. But for code-level OWASP compliance in a continuous integration pipeline, manual review does not scale and does not execute consistently enough to be reliable.

| Dimension | Manual Audit | Automated Audit (Don Cheli) |
| --- | --- | --- |
| Frequency | Once per release cycle | Every commit, every PR |
| Coverage | Depends on reviewer's bandwidth | 100% of changed files, every run |
| Speed | Days to weeks | Under 2 minutes |
| Consistency | Varies by reviewer | Deterministic rule set |
| Cost | High (specialist time) | Zero marginal cost per run |
| Actionability | Report with recommendations | File, line, severity, fix suggestion |
| Integration | External, async | Native quality gate, blocks merge |
| OWASP Coverage | All 10 (deep) | All 10 (pattern-based, fast) |
| Best For | Compliance certification, pen testing | Daily development, CI/CD enforcement |

The right answer is both. Automated audits run on every commit and catch the common, high-frequency vulnerabilities. Manual audits run quarterly or before major releases and catch architectural-level issues that require human judgment. Automation does not replace security engineers — it frees them to focus on deeper work instead of reviewing string concatenation in SQL queries.

4. How Don Cheli's /dc:security-audit Works

The /dc:security-audit command in the Don Cheli SDD Framework is an automated OWASP audit that runs as a quality gate in Phase 6 of the development pipeline. It performs three sequential operations: scan, classify, and report.

Step 1: Scan

The audit engine traverses the codebase and collects all files modified in the current work session or PR scope. It applies a rule set of over 120 security patterns mapped to OWASP Top 10 categories. Rules are written as AST-aware matchers, not naive string searches, which eliminates the false positives that plague simpler tools. The scan runs against both the implementation files and the test files — because test code that handles credentials insecurely is itself a vulnerability vector.
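The real engine uses AST-aware matchers and a rule set of 120+ patterns; the deliberately naive sketch below uses regexes and three invented rules, only to illustrate the shape of the scan step — how a rule maps a matched line to an OWASP category, severity, file, and line number.

```typescript
// Simplified sketch of the scan step. Rules and patterns here are
// illustrative stand-ins, NOT the framework's actual rule set.
interface Rule { id: string; owasp: string; severity: "critical" | "high" | "medium" | "low"; pattern: RegExp; }
interface Finding { ruleId: string; owasp: string; severity: string; file: string; line: number; snippet: string; }

const rules: Rule[] = [
  { id: "sql-interp", owasp: "A03:Injection", severity: "critical", pattern: /query\(`[^`]*\$\{/ },
  { id: "jwt-decode", owasp: "A07:Auth", severity: "critical", pattern: /jwt\.decode\(/ },
  { id: "cors-wild", owasp: "A05:Misconfiguration", severity: "high", pattern: /origin:\s*['"]\*['"]/ },
];

function scan(file: string, source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const r of rules) {
      if (r.pattern.test(text)) {
        findings.push({ ruleId: r.id, owasp: r.owasp, severity: r.severity, file, line: i + 1, snippet: text.trim() });
      }
    }
  });
  return findings;
}

const sample = [
  "const user = await db.query(`SELECT * FROM users WHERE email='${email}'`);",
  "const payload = jwt.decode(token);",
].join("\n");

for (const f of scan("src/lib/db.ts", sample)) {
  console.log(`[${f.severity}] ${f.owasp} ${f.file}:${f.line} ${f.ruleId}`);
}
```

An AST-aware matcher improves on this by matching the parsed structure of a call expression rather than its text, which is why it can tell a real `jwt.decode(token)` call from the same characters inside a comment or string.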

Step 2: Classify

Each finding is classified across three dimensions:

- OWASP category — which of the ten Top 10 categories the detected pattern maps to
- Severity — Critical, High, Medium, or Low, based on exploitability and impact
- Gate impact — whether the finding blocks the phase transition (Critical and High do)

Step 3: Report with Severity and Fix Suggestions

The audit generates a structured report that includes the exact file path, line number, the vulnerable code snippet, the OWASP category, severity, and a concrete, copy-paste-ready fix suggestion. This is the critical differentiator: most audit tools tell you what is wrong. Don Cheli tells you how to fix it, with code.

If any Critical or High severity findings are present, the quality gate blocks the phase transition. The pipeline will not advance to the next phase until the audit passes. This is enforced as an Iron Law — there is no override flag, no --skip-security option.

5. What the Audit Report Looks Like

Here is a representative example of the audit report output. This is the actual format produced by /dc:security-audit when run against a typical AI-generated REST API:

```text
=====================================================
 DON CHELI SDD — SECURITY AUDIT REPORT
 Scope: src/api/users.ts, src/lib/db.ts, src/auth/jwt.ts
 Rules applied: 124 | Files scanned: 3 | Duration: 1.4s
=====================================================

CRITICAL (2 findings)
─────────────────────────────────────────────────────
[C-001] A03:Injection — SQL Injection via String Interpolation
  File: src/lib/db.ts  Line: 47
  Code: `SELECT * FROM users WHERE email='${email}'`
  Fix:  Use a parameterized query:
        db.query('SELECT * FROM users WHERE email=$1', [email])

[C-002] A07:Auth — JWT Signature Never Verified
  File: src/auth/jwt.ts  Line: 23
  Code: jwt.decode(token)  // decode, not verify
  Fix:  Replace with: jwt.verify(token, process.env.JWT_SECRET)

HIGH (1 finding)
─────────────────────────────────────────────────────
[H-001] A05:Misconfiguration — CORS Wildcard in Production
  File: src/api/users.ts  Line: 8
  Code: app.use(cors({ origin: '*' }))
  Fix:  Restrict to known origins:
        app.use(cors({ origin: process.env.ALLOWED_ORIGINS?.split(',') }))

MEDIUM (2 findings)
─────────────────────────────────────────────────────
[M-001] A09:Logging — Sensitive Field Logged to Console
  File: src/auth/jwt.ts  Line: 31
  Code: console.log('User payload:', payload)
  Fix:  Remove or redact before logging:
        console.log('User authenticated:', payload.sub)

[M-002] A01:Access Control — Missing Ownership Check
  File: src/api/users.ts  Line: 62
  Code: getUserById(req.params.id)  // no auth check
  Fix:  Verify the requester owns the resource:
        if (req.user.id !== req.params.id) return res.status(403).json(...)

LOW (1 finding)
─────────────────────────────────────────────────────
[L-001] A02:Crypto — Weak Hashing Algorithm
  File: src/lib/db.ts  Line: 89
  Code: crypto.createHash('md5').update(data)
  Fix:  Use SHA-256 for integrity checks, bcrypt or argon2 for passwords:
        crypto.createHash('sha256').update(data)

=====================================================
SUMMARY: 2 Critical | 1 High | 2 Medium | 1 Low
STATUS:  ✗ BLOCKED — Resolve Critical + High before proceeding
=====================================================
```

The report is both human-readable and machine-parseable. In CI environments it exits with a non-zero status code when Critical or High findings are present, allowing any pipeline orchestrator to treat it as a blocking check.
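The blocking behavior reduces to a few lines of gate logic. This is a sketch of the idea, not the framework's actual implementation — the `Finding` shape here is an assumed, simplified schema.

```typescript
// Sketch of the quality-gate decision: any Critical or High finding
// yields a non-zero exit status, so any CI orchestrator can treat the
// audit as a blocking check.
type Severity = "critical" | "high" | "medium" | "low";
interface Finding { id: string; severity: Severity; }

function gateStatus(findings: Finding[], failOn: Severity[] = ["critical", "high"]): number {
  return findings.some(f => failOn.includes(f.severity)) ? 1 : 0;
}

const findings: Finding[] = [
  { id: "C-001", severity: "critical" },
  { id: "M-001", severity: "medium" },
];

console.log(gateStatus(findings)); // non-zero -> pipeline blocked
// In a CLI entry point: process.exitCode = gateStatus(findings);
```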

6. Integrating Security Audits into Your Pipeline

In the Don Cheli SDD Framework, /dc:security-audit is not an optional add-on you run when you remember. It is embedded as a mandatory quality gate at Phase 6 (Review), and it is also available as an on-demand command you can invoke at any phase.

The integration points are:

- Phase 6 (Review) — the audit runs automatically as a mandatory quality gate before the phase transition
- On demand — /dc:security-audit can be invoked manually at any phase of the pipeline
- CI/CD — the audit runs on every pull request and blocks merge when Critical or High findings are present

The result is a shift-left security model where vulnerabilities are caught at the earliest possible moment — before they accumulate, before they are compounded by dependent code, and before they ever approach production.

GitHub Actions Integration

```yaml
# .github/workflows/dc-security.yml
name: Don Cheli Security Audit

on:
  pull_request:
    branches: [main, develop]

jobs:
  security-audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Don Cheli SDD
        run: npx don-cheli-sdd@latest ci-setup

      - name: Run OWASP Audit
        run: npx don-cheli-sdd audit --scope changed --fail-on high
        env:
          DC_AUDIT_FORMAT: sarif

      - name: Upload SARIF Results
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: dc-audit-results.sarif
```

The --fail-on high flag causes the workflow to fail (and block merge) if any High or Critical findings are detected. The SARIF output integrates directly with GitHub's Security tab, surfacing findings as code annotations on the pull request diff.

7. Beyond OWASP: The Full Security Layer

OWASP Top 10 coverage is the foundation, but the Don Cheli security layer extends beyond it. A production-grade security posture requires three additional dimensions that /dc:security-audit covers in its extended mode:

Permissions and Authorization Audit

Beyond detecting missing authorization checks (A01), the framework audits the permission model itself. It verifies that role definitions are exhaustive, that privilege escalation paths are blocked, and that administrative endpoints are protected by multi-factor patterns. This goes beyond pattern matching into semantic analysis of your access control design.
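One check such an audit can perform is illustrated below. This is not the framework's actual analysis — just a hypothetical sketch of verifying that every registered route declares an explicit role requirement, so no endpoint silently defaults to open access.

```typescript
// Hypothetical permission-model check: flag routes with no declared role.
interface Route { path: string; requiredRole?: "user" | "admin"; }

function unprotectedRoutes(routes: Route[]): string[] {
  return routes.filter(r => r.requiredRole === undefined).map(r => r.path);
}

const routes: Route[] = [
  { path: "/api/users/:id", requiredRole: "user" },
  { path: "/api/admin/metrics", requiredRole: "admin" },
  { path: "/api/export" }, // no role declared -- this is what the audit flags
];

console.log(unprotectedRoutes(routes)); // ["/api/export"]
```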

Secrets Detection

The framework integrates a secrets scanner that detects over 200 credential patterns: API keys, private keys, connection strings, OAuth tokens, and service account credentials. Unlike generic entropy scanners that produce high false-positive rates, the Don Cheli secrets detector uses provider-specific patterns (AWS ARN formats, Stripe key prefixes, etc.) combined with contextual signals to distinguish real secrets from test fixtures and example values.
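The provider-specific approach can be sketched in a few patterns. These regexes are simplified illustrations, not the detector's real 200+ pattern set — real AWS and GitHub key formats carry additional constraints — but they show why prefix-anchored rules beat raw entropy scoring: a Stripe test-mode key simply does not match the live-key pattern.

```typescript
// Illustrative provider-specific secret patterns (simplified examples).
const secretPatterns: { provider: string; pattern: RegExp }[] = [
  { provider: "aws-access-key", pattern: /\bAKIA[0-9A-Z]{16}\b/ },
  { provider: "stripe-live-key", pattern: /\bsk_live_[0-9a-zA-Z]{24,}\b/ },
  { provider: "github-pat", pattern: /\bghp_[0-9a-zA-Z]{36}\b/ },
];

function detectSecrets(text: string): string[] {
  return secretPatterns.filter(p => p.pattern.test(text)).map(p => p.provider);
}

// AWS's documented example key matches; a Stripe test-mode key does not.
console.log(detectSecrets('const key = "AKIAIOSFODNN7EXAMPLE";')); // ["aws-access-key"]
console.log(detectSecrets('apiKey: "sk_test_123"'));               // []
```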

Dependency Vulnerability Scanning

AI coding tools frequently generate package.json or requirements.txt files with dependency versions pinned to whatever was current in the training data — which may be months or years out of date. The framework's dependency scanner queries the OSV (Open Source Vulnerabilities) database on every audit run to flag dependencies with known CVEs, along with the patched version to upgrade to.
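The core of such a check — comparing a pinned version against a known-vulnerable range — looks roughly like the sketch below. The real framework queries OSV at audit time; the `advisories` array and its CVE identifier here are hypothetical local stand-ins for that lookup, and the version comparison is simplified (no prerelease tags or range operators).

```typescript
// Hypothetical local stand-in for an OSV lookup.
interface Advisory { pkg: string; vulnerableBelow: string; fixedIn: string; cve: string; }

// Compare dotted versions numerically (simplified: numeric segments only).
function lessThan(a: string, b: string): boolean {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] ?? 0, y = pb[i] ?? 0;
    if (x !== y) return x < y;
  }
  return false;
}

const advisories: Advisory[] = [
  // Fictional package and CVE, for illustration only.
  { pkg: "left-pad-ish", vulnerableBelow: "2.1.4", fixedIn: "2.1.4", cve: "CVE-XXXX-0001" },
];

function flagVulnerable(deps: Record<string, string>): string[] {
  return advisories
    .filter(a => deps[a.pkg] !== undefined && lessThan(deps[a.pkg], a.vulnerableBelow))
    .map(a => `${a.pkg}@${deps[a.pkg]} -> upgrade to ${a.fixedIn} (${a.cve})`);
}

console.log(flagVulnerable({ "left-pad-ish": "2.0.9" })); // flagged with upgrade target
console.log(flagVulnerable({ "left-pad-ish": "2.1.4" })); // []
```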

Together, these three layers — OWASP pattern audit, secrets detection, and dependency scanning — form a defense-in-depth approach to code security that is practically impossible to replicate at this frequency and consistency with manual review alone.

8. Getting Started with Automated Security Auditing

Running your first automated OWASP audit takes under three minutes. Here is the complete setup:

Installation

```bash
# Install Don Cheli SDD globally
npx don-cheli-sdd init

# Or via git clone
git clone https://github.com/doncheli/don-cheli-sdd.git
cd don-cheli-sdd && bash scripts/instalar.sh --global
```

Run Your First Security Audit

```bash
# Audit files changed in the current session
/dc:security-audit

# Audit the full codebase
/dc:security-audit --scope full

# Audit with secrets detection enabled
/dc:security-audit --scope full --include-secrets

# Audit with dependency scanning
/dc:security-audit --scope full --include-deps

# Full audit: OWASP + secrets + dependencies
/dc:security-audit --scope full --include-secrets --include-deps

# Output in JSON format for CI integration
/dc:security-audit --scope full --format json --output dc-audit.json
```

Configure Your Quality Gate Threshold

In your dc.config.json:

```json
{
  "security": {
    "audit": {
      "blockOn": ["critical", "high"],
      "warnOn": ["medium"],
      "includeSecrets": true,
      "includeDeps": true,
      "owaspCategories": "all"
    }
  }
}
```

Once configured, the security audit becomes an invisible but non-negotiable part of your development loop. You will catch SQL injections, exposed credentials, and broken authentication on your local machine — in seconds — instead of discovering them in a post-incident review.

The framework is free, open source, and Apache 2.0 licensed. Security should not be a premium feature. It should be the default.

Stop shipping vulnerable AI-generated code.

Add automated OWASP auditing to your pipeline today with Don Cheli SDD. Zero configuration, instant results, every commit.

Get Started on GitHub →