Michael Erquitt, Senior Security Engineer, Security Journey

Michael Erquitt discusses AI security risks in software development and why secure coding training must evolve alongside AI adoption to protect organisations from emerging threats.

Today we're meeting Michael Erquitt, Senior Security Engineer at Security Journey.

The company specialises in application security education, helping developers and SDLC teams recognise, understand and proactively mitigate threats and vulnerabilities.

Over to you, Michael - my questions are in bold:


Who are you, and what’s your background?

I am Michael Erquitt, a dedicated Senior Security Engineer at Security Journey, passionate about software and application security. My academic qualifications include a B.Sc. in Real Estate from the University of Central Florida, an MBA and an M.Sc. in Finance from the University of San Diego, and an M.Sc. in Cybersecurity Engineering from the Shiley-Marcos School of Engineering.

My career began as a United States Army Special Forces Engineer, where I developed a strong foundation in security working across multiple domains in the special operations community. From there, I broadened that expertise internationally, providing technical guidance for global supply chain acquisitions and researching threat intelligence for organisations in critical infrastructure and industry. Following this, I transitioned into blockchain and cryptographic security, working for Kudelski Security and other cutting-edge blockchain startups.

What is your job title, and what are your general responsibilities?

As a Senior Security Engineer at Security Journey, my primary responsibility is to develop cutting-edge educational content for all our learners.

Can you give us an overview of how you’re using AI today?

AI adoption is taking place at an unprecedented rate, both at an organisational level and among individual software developers. As a result, the industry and the way code is written are being transformed, and rapidly exposed to a range of novel and evolving risks.

Security Journey aims to mitigate these risks, educating developers and everyone in the SDLC to shift left and build safer applications during development. We stand out because of our learner-first approach, which results in relevant, quality training that helps developers address real-world challenges such as enterprise AI implementation, vectors and embeddings, and building AI agents. It goes beyond checking boxes and helps build a security-first mindset.

Tell us about your investment in AI. What’s your approach?

Our approach to AI is centred around continuous education and maintaining a strong security posture. AI, Machine Learning and Large Language Models offer numerous benefits that should be embraced. However, it is imperative that the accompanying challenges are also addressed.

The future of software development is about striking the right balance between developers and intelligent tools, proactively addressing issues such as complacency and vulnerability introduction through training. Whatever task AI is being used to complete, blind trust can cause problems, so human oversight and a foundation of strong security practices remain critical.

What prompted you to explore AI solutions? What specific problems were you trying to solve?

At Security Journey, we are committed to proactively addressing the risks of using generative AI to write code. Using AI tools securely is crucial to protect organisations and underscores the need for security education and training within development teams.

Consider AI agents: incredible for productivity, yet risky, because if insecure actions occur during their assigned tasks, vulnerabilities can be introduced and the attack surface increases. Relying on AI to catch these AI-generated gaps and code errors is not a viable solution, because AI cannot understand context and specific business logic during a code review the way humans can. Therefore, in an AI-driven world, the fundamentals of application security remain essential.
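To make the code review point concrete, here is a minimal illustrative sketch (the table and function names are hypothetical, not from the interview): a plausible-looking helper of the kind an AI assistant might generate, which builds SQL by string interpolation, next to the parameterised version a security-aware human reviewer would insist on.

```python
import sqlite3

# Hypothetical AI-suggested helper: it works in testing, but interpolating
# user input straight into the SQL string is a classic injection hole.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# What a security-aware human review demands: a parameterised query, which
# keeps user input out of the SQL grammar entirely.
def find_user_secure(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```

A crafted username containing a quote and an OR 1=1 clause makes the insecure version return every row, while the secure version treats the whole string as a literal (and non-existent) username. Judging which behaviour the surrounding business logic actually permits is exactly the contextual call that, as argued above, AI reviewers struggle to make.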

Who are the primary users of your AI systems, and what’s your measurement of success? Have you encountered any unexpected use cases or benefits?

Success is when an organisation has built a security-minded culture, with security practices instinctively embedded into every stage of the software development lifecycle. The primary users vary from team to team, and each AI system is specialised for its task, augmenting those teams and increasing the velocity of what we deliver.

What has been your biggest learning or pivot moment in your AI journey?

One of the biggest learning moments in our AI journey was addressing the non-deterministic nature of the technology. This means that, depending on context and other factors, an AI model can return different outputs even if the input prompt is identical.
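As a minimal sketch of where that non-determinism comes from, the toy example below uses temperature-scaled sampling over made-up next-token scores (the numbers and tokens are illustrative, not from any real model). At temperature zero, decoding is greedy and repeatable; at any positive temperature, the model samples from a probability distribution, so identical prompts can yield different outputs.

```python
import math
import random

def sample_next_token(logits: dict, temperature: float) -> str:
    """Pick the next token from raw model scores (logits)."""
    if temperature == 0:
        # Greedy decoding: deterministic, always the highest-scoring token.
        return max(logits, key=logits.get)
    # Temperature-scaled softmax: higher temperature flattens the
    # distribution, making lower-scoring tokens more likely to be picked.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    return random.choices(list(weights), weights=list(weights.values()))[0]

# Illustrative scores for the token that follows an identical prompt.
scores = {"secure": 2.0, "fast": 1.5, "simple": 0.5}
print([sample_next_token(scores, 0.0) for _ in range(5)])  # always 'secure'
print([sample_next_token(scores, 1.0) for _ in range(5)])  # varies run to run
```

In production systems, batching, hardware differences and model updates can add further variation, so even nominally deterministic settings are not always perfectly repeatable.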

How do you address ethical considerations and responsible AI use in your organisation?

Responsible AI use is at the core of our work at Security Journey. We aim to equip developers and the SDLC team with the necessary skills to employ AI with minimal risk. The foundations of this are an AI security policy that fits seamlessly into workflows, and secure code training.

Organisations should define their unique risk rating and explicitly inform employees which tools they are authorised to use and how they can use them, including what data they can input. This must happen alongside secure code training, because secure code knowledge is what separates a developer who is equipped to protect an organisation from a breach from one who leaves it open to vulnerabilities and attacks arising from AI use.

What skills or capabilities are you currently building in your team to prepare for the next phase of AI development?

I am currently looking to develop a culture of security amongst developers. This means going beyond tick-box awareness and one-off education programmes and viewing secure code training as a continuous process. Malicious AI agents are fast-moving and constantly evolving, and education must mirror this, ensuring teams are empowered with current knowledge.

A proactive security culture allows developers to integrate security principles throughout the software development lifecycle, significantly reducing the introduction of vulnerabilities.

If you had a magic wand, what one thing would you change about current AI technology, regulation, or adoption patterns?

To ensure successful AI adoption, I want to see organisations integrating secure code training in tandem with their AI roll-outs. It is imperative that prospective AI tools undergo thorough testing by security teams and that developers are trained to write code securely. This should include extensive knowledge of common vulnerabilities and secure design principles.

However, the current landscape does not support this: not one of the top 50 undergraduate computer science programmes in the US requires a course in secure coding or application security for majors. I strongly recommend changing this if we want to stay secure against an ever-evolving array of malicious agents.

What is your advice for other senior leaders evaluating their approach to using and implementing AI? What’s one thing you wish you had known before starting your AI journey?

Organisations should not shy away from utilising AI tools; however, maximising efficiency must come hand in hand with mitigating risk. At the core of this is a clear AI security policy, and the sooner the policy is implemented, the more effective it can be, because stopping bad habits from forming is far easier than breaking them.

A clear, accessible AI policy is the first line of defence for companies, especially given that employees at all levels are now utilising AI on a daily basis. Organisations must approach this with the correct attitude, not perceiving security policy as disruptive to workflows, but rather as a means of protecting themselves and their customers and discouraging individuals from bypassing guidelines to leverage AI innovation.

What AI tools or platforms do you personally use beyond your professional use cases?

Different AI tools are used for different purposes. If we are exploring a new feature or brainstorming an implementation, we will use specific Large Language Models such as Claude, along with integrated IDEs, to ideate and prototype quickly and see whether, and how, ideas fit our architecture.

What’s the most impressive new AI product or service you’ve seen recently?

The most impressive recent development we have seen is reasoning-based agentic AI tools, which break tasks down into iterative steps and navigate the web for resources and information, with the option to observe what the “computer” is doing. They can be resource-intensive and sometimes incorrect, but given the necessary context for the task, a reasoning-based agent system is a powerful way to speed up research on certain tasks.
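As a rough sketch of the loop such tools run (the model client and tools below are scripted stand-ins, not any specific product’s API), the pattern is: reason over the transcript so far, pick an action, execute it, observe the result, and repeat until the task is judged complete.

```python
# Minimal sketch of a reasoning/acting agent loop, with stand-in tools.
def run_agent(task: str, call_llm, tools: dict, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Ask the model to reason over everything observed so far and
        # propose the next step as a structured action.
        action = call_llm("\n".join(history))
        if action["type"] == "finish":
            return action["answer"]
        # Execute the chosen tool and append the observation, which is
        # also what lets a human watch what the "computer" is doing.
        observation = tools[action["tool"]](action["input"])
        history.append(f"Action: {action}\nObservation: {observation}")
    return "Stopped: step budget exhausted without an answer."

# Scripted stand-ins so the sketch runs end to end without a real model.
def fake_search(query: str) -> str:
    return f"(top results for {query!r})"

scripted_steps = iter([
    {"type": "tool", "tool": "search", "input": "recent injection CVEs"},
    {"type": "finish", "answer": "Summary drafted from gathered results."},
])

print(run_agent("Research a class of vulnerability",
                lambda prompt: next(scripted_steps),
                {"search": fake_search}))
```

The step budget and the visible action/observation transcript are the two levers that keep such agents both bounded in cost and auditable by a human.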

Finally, let’s talk predictions. What trends do you think will define the next 12–18 months in the AI technology sector, particularly for your industry?

Within the realm of secure code training, we have already shifted to include AI within security content, and this is forecast to continue. While we encourage developers to leverage LLMs and AI tools, we recognise that AI has the potential to both enhance and compromise security. That’s why we focus on helping individuals use tools securely, with a clear understanding of the risks associated with AI-generated code.

Looking ahead, we anticipate a rise in AI-enhanced attacks, and we’re excited to create engaging, educational content that explores and demystifies these emerging threats.

From a broader market perspective, the next 10 years will bring even more instability than we see today. The winners will be those who neither dive into the deep end with every new release nor ignore new releases entirely.

For instance, with pen testing, there will soon come a day when AI can enumerate and threat-map a target faster and better than a human can. However, it will need to hand off the problems it cannot solve to us. So, if anything, this is a call for everyone to start specialising in knowledge that is not readily available online. I see our new role as humans as filling the gaps where AI underperforms. That’s where the jobs will be, and that’s where the market is shifting.


Thank you, Michael. Connect with Michael on LinkedIn and read more about Security Journey at their website.