JetBrains IDEs add local AI code completion – TechTarget


Paying subscribers to most JetBrains IDEs now have a built-in local AI code completion option that uses a separately trained small language model to suggest full lines of code, along with built-in checks for code correctness.

The feature shipped this week in version 2024.1 of JetBrains IDEs including IntelliJ IDEA, PyCharm, WebStorm, PhpStorm, GoLand and RubyMine, covering the Java, Kotlin, Python, JavaScript, TypeScript, CSS, PHP, Go and Ruby programming languages. In a future release, the company will add full line code completion support to its Rider, RustRover and CLion Nova IDEs, which support C#, Rust and C++. JetBrains’ local AI code completion models also verify that suggestions for completing lines of code reference real variables and methods and use correct syntax.

“One of the main pain points of AI code completion tools is that they sometimes generate methods or variables which do not exist, and [the user] has to accept the suggestion and then correct it,” said Daniel Savenkov, a senior machine learning engineer in JetBrains’ full line code completion team. “With full line code completion, before we show the suggestion, we do all those correctness checks.”
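In rough terms, a pre-display correctness check of the kind Savenkov describes can be sketched in a few lines of Python: parse the candidate completion and reject it if it has a syntax error or references a name the project doesn't define. This is an illustrative sketch, not JetBrains' actual implementation.

```python
import ast

def passes_correctness_checks(suggestion, known_names):
    """Return True only if a single-line completion parses cleanly and
    every identifier it reads exists in the project scope."""
    try:
        tree = ast.parse(suggestion, mode="exec")
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        # Only names being *read* must already exist; assignments create new ones.
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
            if node.id not in known_names:
                return False
    return True

known = {"items", "sum"}
print(passes_correctness_checks("total = sum(items)", known))   # True
print(passes_correctness_checks("total = sume(items)", known))  # False: unknown name
print(passes_correctness_checks("total = sum(items", known))    # False: syntax error
```

A completion engine would run every model suggestion through a filter like this and show the user only the survivors, which is why hallucinated methods never reach the editor.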

JetBrains already offers AI Assistant, a cloud-based plugin that can generate entire blocks of code but does not yet include variable verification and correctness checks, according to company officials. That feature is planned for the cloud service in a future release.

Having AI code completion built into the JetBrains IDE also means the model is aware of the specific project a developer is working on, said Michael Kennedy, founder of Talk Python Training, a Python Software Foundation Fellow and a user of JetBrains’ PyCharm IDE.

“The JetBrains tool understands all of your code and all of your projects — if I were to go to ChatGPT and paste in 10 lines of code [and say] ‘Help me with this,’ it will do that, but only in that little isolated context,” Kennedy said. “But the JetBrains one says ‘Oh, over here you have this file that does that, this file that does that,’ and so it combines all that information to help you work within that context.”
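Kennedy's point about project-wide awareness can be sketched as gathering a project's source files into a single context string for a local model, rather than pasting an isolated snippet into a chatbot. The function below is a naive, hypothetical illustration of that idea, not how the IDE actually assembles context.

```python
from pathlib import Path

def build_project_context(root, max_chars=4000):
    """Naively concatenate a project's Python files into one context
    string for a local completion model (illustrative sketch only)."""
    parts = []
    used = 0
    for path in sorted(Path(root).rglob("*.py")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        snippet = f"# file: {path.name}\n{text}\n"
        if used + len(snippet) > max_chars:
            break  # stay within the model's context budget
        parts.append(snippet)
        used += len(snippet)
    return "".join(parts)
```

A real IDE would be far more selective, ranking files by relevance to the caret position, but the principle is the same: the model sees the surrounding project, not just the current line.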

JetBrains local code completion vs. AI Assistant

Because it fills in whole blocks of code, the cloud-based AI Assistant might be better suited to developers who aren’t sure how to implement a broader idea for an application, while full line code completion saves only about 20% of keystrokes for developers, according to a company blog post.

Another advantage of local AI code completion is that security and privacy-conscious users don’t have to send data to a cloud-based large language model (LLM) over a network, as they must do with AI assistant services such as GitHub Copilot.


“That will keep your code from leaking out over the network,” said Andrew Cornwall, an analyst at Forrester Research, of the JetBrains approach. “The price you pay is that your developers will be less productive than if they could generate more code at once.”

To further assuage general copyright concerns with AI assistants, JetBrains trained its own programming language-specific 100-million-parameter small language model using a data set of open source code with permissive licenses for this release.
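Curating a training set by license, as JetBrains describes, amounts to filtering a corpus against an allowlist of permissive licenses. The sketch below illustrates the idea; the license list and record format are assumptions for illustration, not details of JetBrains' pipeline.

```python
# Illustrative allowlist of common permissive licenses (SPDX-style ids).
PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc"}

def filter_permissive(records):
    """Keep only source files whose declared license is permissive."""
    return [r for r in records if r.get("license", "").lower() in PERMISSIVE]

corpus = [
    {"path": "a.py", "license": "MIT"},
    {"path": "b.py", "license": "GPL-3.0"},   # copyleft: excluded
    {"path": "c.py", "license": "Apache-2.0"},
]
print([r["path"] for r in filter_permissive(corpus)])  # ['a.py', 'c.py']
```

As Cornwall notes below, license filtering reduces rather than eliminates copyright exposure, since even permissive licenses carry attribution requirements.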

Permissive licenses on training data don’t totally resolve potential copyright concerns, according to Cornwall, but a single line represents a lower risk than a whole block of code or more.

“The chance that an open source project will successfully sue you for a single line is zero, [but] as the size of generated code increases — which is what developers will want — companies must be alert,” Cornwall said. “At some point, the generated code could get big enough to warrant a legal notice that your code contains IP from a permissively licensed project.”

Interest in local AI code generation grows

JetBrains IDEs are ahead of developer tools integrated with GitHub Copilot, such as Microsoft’s Visual Studio Code, in offering local AI code generation, but they aren’t alone in the market. AI coding assistants that can run locally are also available from vendors and open source projects including FauxPilot, LocalPilot, Salesforce CodeGen, Code Llama, StarCoder and Tabnine.

Meanwhile, Visual Studio Code, the most widely used IDE in the world, still lacks local AI support, said Andy Thurai, an analyst at Constellation Research.

“GitHub Copilot gained so much popularity because of its integration and seamless code generation within Visual Studio,” Thurai said. “However, licensing concerns and the telemetry the software sends back to Microsoft have been a concern for a lot of customers, so locally run code generation AI models became popular.”

It’s likely GitHub will eventually offer a local version of Copilot, said Michele Rosen, an analyst at IDC. 

“[GitHub has] been very focused on ensuring low latency for code generation, which is one of the strengths of local models,” Rosen said. “For developers who need to keep their code on-prem, small language models are closing the gap between the tools they can use and those that leverage LLMs.”

Beth Pariseau, senior news writer for TechTarget Editorial, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out on X @PariseauTT.
