AI coding assistants still need to mature, but they are already helpful, according to several attendees of a recent Silicon Valley event for software developers.
Software developers who attended the DeveloperWeek conference in Santa Clara, CA, on February 12 were mostly optimistic about AI coding assistants, after having tried them out. “They certainly provide an opportunity to accelerate software development,” said Jens Wessling, CTO and chief architect at software security company Veracode. Wessling has used GitHub Copilot, Tabnine, and JetBrains AI Assistant. “It’ll be interesting to see how in the long term they address issues like security and correctness, but it’s a step in an interesting direction.”
“They’re great as tools,” said Juan Salas, CTO at Alto, which provides software development services with a focus on Latin America. Having used GitHub Copilot and Cursor, Salas said these tools help save time if users know how to use them. GitHub Copilot, agreed college student Aasritha M., is a “pretty cool extension.” She said she likes how Copilot recognizes the pattern of what the developer is doing and anticipates what they are about to do next; the tool almost never gets this wrong, she said. However, ChatGPT does a better job of finding mistakes in code, she said. Aasritha also has used Mistral and found it to be a pretty good tool, similar to ChatGPT.
Another college student, Sahil Shah, who recently completed an internship at Lattice Semiconductor and works with Python, found GitHub Copilot “pretty useful when it comes to scripting.” Ratna Maharjan, lead product software engineer at information and services company Wolters Kluwer, also has found GitHub Copilot and ChatGPT “pretty helpful” in providing code snippets to use in his code. “So far, whatever I have seen is very, very good,” he said.
But attendees also expressed some dissatisfaction with AI tools. There are some things these tools are good at and some things they are bad at, Wessling said. “AI coding tools often do a good job with boilerplate code and producing volumes of code with repeating, well-understood patterns,” he said. “They tend to not do well with libraries and library versions—remaining consistent on which library you’re importing and which method you’re using out of a given library.”
“They occasionally just hallucinate and provide sort of random answers,” Wessling added.
Shah said that the answers GitHub Copilot gives him are “sometimes vague.” Still, he is looking forward to using AI coding tools more, to make his work more effective.
Developer and retired physicist Peter Luh said he tested GitHub Copilot on four math problems during the conference, on February 13. “I’m sorry to report to you that Copilot failed miserably on all four problems,” he said. But Luh believes Copilot might be OK for general chats that include “hallucination” responses.
AI coding tools can give an illusion of getting to a solution quickly, Salas said. He believes AI plus human direction is better than either AI or a human working alone. He said AI coding assistants definitely will get better, but today users need technical nuance and need to know what to ask them. “Otherwise, you’re going to be spinning in circles,” due to the challenges their code often presents, he said.