
AI Security Engineer
capital.com
Location: Not specified
Salary: Not specified
Posted: 1w ago
Job Type: Full Time
About the Role
We are looking for an AI Security Engineer to secure our AI-driven systems, including LLM-based applications, machine learning models, and AI-enabled automation tools.
This role will focus on identifying, assessing, and mitigating security risks across the AI lifecycle — from model development and training to deployment and runtime monitoring.
The ideal candidate combines strong security engineering experience with a deep understanding of machine learning systems and emerging AI-specific threats (e.g., prompt injection, model poisoning, data leakage, adversarial attacks).
Key Responsibilities
- Design and implement security controls for AI/ML systems across development, training, and production.
- Secure LLM integrations, RAG pipelines, and AI APIs.
- Conduct threat modeling for AI systems and data pipelines.
- Define secure-by-design patterns for AI-powered features.
AI Threat Detection & Mitigation
- Identify and mitigate AI-specific threats, including:
  - Prompt injection and jailbreak techniques
  - Model poisoning and data contamination
  - Adversarial attacks
  - Training data leakage
  - Insecure model serialization
  - Excessive permissions in AI agents
- Develop guardrails, content filters, and output validation mechanisms.
- Implement monitoring for anomalous AI behavior.
Secure Development & DevSecOps
- Integrate AI security checks into CI/CD pipelines.
- Perform security reviews of ML code and AI-related infrastructure.
- Secure model registries and artifact storage.
- Collaborate with other engineers and platform teams to enforce security standards.
Data Protection & Compliance
- Ensure AI systems comply with GDPR, data privacy regulations, and financial industry regulatory requirements.
- Implement controls for sensitive data used in training and inference.
- Perform AI risk assessments aligned with the internal risk methodology.
Governance & Policy
- Contribute to AI security standards and internal policies.
- Define AI risk classification and control frameworks.
- Support security reviews for new AI initiatives and tools.
Requirements
- 3–5+ years in software engineering, ML engineering, or application security.
- Hands-on experience with AI/ML systems — LLMs, NLP models, or similar.
- Python proficiency for automation and scripting.
- Experience working with Claude Code.
- Strong understanding of cloud platforms: AWS, Azure, or GCP.
- Experience with API security, Docker, and Kubernetes.
- Knowledge of AI-specific security risks and mitigations.
- Experience conducting threat modeling and risk assessments.
- Familiarity with RAG architectures, vector databases, ML pipelines (MLflow, Kubeflow, SageMaker).
- Experience in fintech or regulated environments.
- Knowledge of AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Experience with AI red teaming.
- Background in cybersecurity or application security (OWASP, Secure SDLC).
- Strong analytical and problem-solving skills.
- Ability to translate technical risk into business impact.
- Able to explain AI security risks and mitigations to non-security teams.
- Cross-functional collaboration with ML, data, and product teams.
- Clear documentation and communication skills.
Be a key player at the forefront of the digital assets movement, propelling your career to new heights! Join a dynamic and rapidly expanding company that values and rewards talent, initiative, and creativity. Work alongside one of the most brilliant teams in the industry.