THESIS: What security risks are introduced by generative AI in software development?
Join us for your thesis work! Gain hands-on experience, work on real projects, and develop your skills in a supportive and innovative environment!
High-level description
Generative AI is increasingly used in software development to write code and tests and to suggest solutions. This boosts productivity but also introduces several risks, security being a central one. The AI tools in use may generate code that contains vulnerabilities, reproduce insecure patterns from their training data, or create a false sense of confidence among developers who use them without critical review. Understanding these risks is essential both for developers who want to work safely with AI and for companies making well-informed decisions about integrating AI tools into their workflows.
Who are we looking for?
Bachelor/Master of Science in Computer Science/Engineering
Project description
This thesis will investigate the security risks introduced by using generative AI in the software development process. As tools like GitHub Copilot and ChatGPT become increasingly integrated into development workflows, they offer clear benefits in terms of efficiency and productivity. However, their outputs may also contain subtle vulnerabilities, reuse insecure coding practices from training data, or encourage overreliance by developers who assume correctness without proper verification.
This thesis aims to study these issues systematically through a combination of literature review, code generation experiments, and interviews with practitioners. By analysing AI-generated code using static analysis tools and penetration testing, the thesis will identify common patterns of security weaknesses.
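To make the kind of analysis concrete, the sketch below shows one well-documented weakness class that code assistants have been observed to produce: an SQL query built by string interpolation, alongside a parameterized alternative. The function names and the sqlite3 setup are illustrative assumptions, not part of the project plan; static analysers such as Bandit (rule B608 for string-built SQL) or Semgrep can typically flag the insecure variant automatically.

# Hypothetical illustration (not part of the thesis plan): an insecure pattern
# a code assistant might suggest, and the safer parameterized equivalent.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable: the username is interpolated straight into the query, so
    # input such as "x' OR '1'='1" changes its meaning (SQL injection).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query keeps user data separate from SQL syntax.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

In the planned experiments, a similar comparison could be run at scale: prompt the assistant, collect the generated code, and tally which weakness categories the analysis tools report.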
The results will be synthesized into a framework that categorizes risks and highlights mitigation strategies. The goal is to provide practical insights that help developers, teams, and organizations use generative AI responsibly and securely in their software development processes.
Purpose and Scope
• Identify and categorize the most common types of security vulnerabilities present in AI-generated code.
• Analyse how the use of generative AI affects developers’ ability to recognize and prevent security risks.
• Evaluate existing mitigation strategies and propose practical guidelines or best practices for safe use of AI tools.
• Provide actionable insights for both developers and organizations regarding secure integration of AI into development workflows.
• Delimit the study to risks associated with code generation in the development phase, excluding broader ethical or legal aspects such as copyright.
An Exciting Journey with Knightec Group
Semcon and Knightec have joined forces as Knightec Group. Together, we are Northern Europe’s leading strategic partner in product and digital service development. With a unique combination of cross-functional expertise and a holistic business understanding, we help our clients realize their strategies – from idea to complete solution.
Practical Information
This is a thesis position located at our office in Sundsvall, with a start date in January or March 2026.
Please submit your application as soon as possible, but no later than 2025-11-30. If you have any questions, you are welcome to contact Johanna Edström. Note that due to GDPR, we only accept applications through our careers page.
- Business unit: Thesis
- Role: Bachelor thesis
- Location: Sundsvall
