GitHub’s AI-powered coding assistant, GitHub Copilot, may suggest insecure code when the user’s existing codebase contains security issues, according to developer security company Snyk.

GitHub Copilot can replicate existing security issues in code, Snyk said in a blog post published February 22. “This means that existing security debt in a project can make insecure developers using Copilot even less secure,” the company said. Conversely, GitHub Copilot is less likely to suggest insecure code in projects without security issues, because it has fewer insecure code examples in its surrounding context to draw from.
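To make the mechanism concrete, consider a hypothetical Python sketch (not taken from the Snyk post) of the class of issue involved: a SQL query built by string interpolation. If neighboring files in a project contain queries written this way, an assistant drawing on that context may suggest more of the same, whereas a codebase full of parameterized queries gives it safer patterns to imitate.

```python
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is interpolated directly into the SQL
    # string. An input like "' OR '1'='1" changes the query's structure.
    # Code like this elsewhere in a project becomes context an assistant
    # can replicate.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def get_user_secure(conn: sqlite3.Connection, username: str):
    # Safer pattern: a parameterized query. The driver binds the value,
    # so user input cannot alter the SQL structure.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchone()
```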