Embracing the Evolution: AI, Engineering, and Software Security

There has been a lot of talk recently about how artificial intelligence (AI) will transform our lives. While AI as a concept and in practice is not new (think auto-complete and chatbots), significant advances in recent years have led many individuals and organizations to try to understand how they will leverage this powerful aid. Organizations are especially interested in how AI can change the way software is developed. Integrating AI into the software development lifecycle (SDLC) can enhance efficiency and unlock new opportunities by letting developers offload menial tasks to the AI.

What is AI?

The term AI is broad and can encompass many different technologies and components. Here are some basic terms related to AI.

Artificial Intelligence (AI) refers to the development of computer systems that perform tasks typically requiring human intelligence, such as understanding natural language, recognizing images, and making decisions. AI encompasses a range of techniques and approaches, including machine learning, deep learning, natural language processing, and computer vision.
Conversational AI enables computers or machines to understand, interpret, and respond to human language. This technology allows machines to interact with humans in a natural, human-like way, using voice, text, or other forms of communication.

A Large Language Model (LLM) is a type of AI system that has been trained on vast amounts of text data to generate natural language responses to a given prompt or query. These models use advanced machine learning techniques, such as deep neural networks, to learn the patterns and structure of human language and generate responses that are often indistinguishable from those written by humans.

Generative AI can create or generate new content, such as images, music, or text, that has not been explicitly programmed into the system. Unlike traditional AI models that rely on pre-existing data to make predictions or decisions, generative AI can produce original content by learning patterns and relationships in data and using that information to create something new. Generative AI typically uses deep learning models, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), to learn and generate new content.


Generative Pre-trained Transformer (GPT) refers to a deep learning model architecture used in language models such as ChatGPT. The GPT architecture is designed to generate coherent and contextually relevant language, making it well suited for natural language processing tasks like chatbot conversations, language translation, and text summarization.


We’re accustomed to seeing these technologies used in conversations with chatbots, in generating images and videos, and in automated assistants and autocomplete. But what about using AI in software development? Developing software is a complex and resource-intensive endeavor, involving teams of architects, developers, testers, operations, and security personnel. Each of these skilled individuals plays a key role in turning an idea into running code. AI can act as a member of that development team, automating or assisting with tasks that are repetitive or require less human finesse.

For example, AI can assist with code generation by looking at the current code base and writing snippets or entire modules based on previous development. These snippets can be requested, reviewed, and then implemented by a software engineer as they build their application. This is not far removed from developers pulling code snippets from third-party software or coding sites and integrating them into their software. Using AI, however, allows an organization to train the model on its own code base and standards, helping ensure that the generated code adheres to the company’s best practices and requirements by mimicking previous development while drastically reducing development time.
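
To make that workflow concrete, here is a minimal sketch of how a team might request a snippet from an internally hosted code model over HTTP. The endpoint URL, payload fields, and model name are hypothetical placeholders rather than any specific product’s API; the point is the request, review, and implement loop described above.

    import requests

    # Hypothetical internal code-generation service; the URL, payload
    # fields, and model name below are illustrative placeholders only.
    ENDPOINT = "https://ai.example.internal/v1/generate"

    def suggest_snippet(prompt: str) -> str:
        """Request a code suggestion from the (hypothetical) internal model."""
        response = requests.post(
            ENDPOINT,
            json={
                "model": "company-code-model",  # trained on the org's own code base
                "prompt": prompt,
                "max_tokens": 256,
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["completion"]

    if __name__ == "__main__":
        # The returned snippet is reviewed by an engineer before it is
        # committed, just like code pulled from any third-party source.
        print(suggest_snippet("Write a function that validates an order ID"))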

Are there concerns?

As you might expect, there are plenty of concerns with using AI on potentially sensitive and confidential information. We don’t have to look far for an example of how a service like ChatGPT, tapped by a few employees, can present concerns for an entire organization. In April 2023, Samsung discovered that several of its employees had leaked confidential company information through ChatGPT. The employees reportedly used the chatbot to work with sensitive information and trade secrets, unaware that their conversations could be logged and retained by the service. While this was likely not malicious, and points more to employees’ desire to use the technology to make themselves more productive, it does highlight the risks of unfettered access to the technology.

There are no doubt many developers looking to leverage ChatGPT to write code and become more productive, and this leaves organizations wondering how best to manage that usage. Organizations want their developers to be as productive as possible, without the risks associated with leaking company information.

Can developers use AI?

Tools like Microsoft’s GitHub Copilot aim to address these security and privacy concerns by bringing the AI into the development environment and providing a more controlled space in which developers can gain the efficiencies and assistance they are looking for. The integration of Copilot into software development environments is designed to provide a seamless developer experience while promising to keep data and input secure. Developers interact with Copilot through inline code completions in the editor, similar to IntelliSense: they can accept a suggestion with a keystroke and easily modify or adapt the generated code to fit their specific needs. Developers should embrace tools like these to take advantage of the potential time savings, as well as the reduction in defects and security issues introduced by poor coding practices.
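
As an illustration of that experience, a developer might type only a comment and a function signature, and the assistant proposes the body inline for the developer to accept, modify, or reject. The example below is hypothetical, not output from any particular tool.

    import re

    # The developer types the comment and the signature; the assistant
    # proposes the body, shown here as it might look once accepted.

    # Check that an email address is plausibly valid.
    def is_valid_email(address: str) -> bool:
        # Suggested implementation: a simple pattern check that the
        # developer should still review (regex validation has limits).
        pattern = r"^[\w.+-]+@[\w-]+\.[\w.-]+$"
        return re.match(pattern, address) is not None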


Despite its numerous advantages, the integration of AI into software development introduces security considerations, just like any other tool leveraged in the SDLC. An AI-powered tool must not expose the organization to new cyber risks, and humans are still required to ensure that AI-generated content meets the organization’s requirements and standards. When these risks are allowed to persist, particularly in the absence of human oversight of the generated code, they can lead to unintended results. There have been cases of GPT-generated code that contained security vulnerabilities or violated programming best practices, which highlights the need for careful oversight and validation when using AI-generated code. The responsibility ultimately falls on developers and organizations to review and verify the code generated by AI systems to ensure its correctness, security, and adherence to coding standards.
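
For instance, reviewers regularly catch flaws like the one sketched below: a generated database query that concatenates user input directly into SQL (a classic injection vulnerability), alongside the parameterized version a human reviewer should insist on. The code is a hypothetical illustration, not output from any specific model.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    def find_user_unsafe(username: str):
        # Vulnerable pattern sometimes seen in generated code: user input
        # is concatenated into the SQL statement, enabling SQL injection.
        query = "SELECT * FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(username: str):
        # Reviewed version: a parameterized query keeps the input as
        # data, never as executable SQL.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()

    print(find_user_safe("alice"))  # [('alice',)]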

Like every technology change that came before, AI will have an impact on the way software is developed, providing efficiencies and a faster speed to market for organizations. While these AI technologies hold immense potential to revolutionize software development, the security challenges remain, and the risk to the organization is likely to increase while the industry is still determining how to harness this amazing leap in technology.