Concerned about your data ending up on corporate servers every time you chat with an AI like Copilot or ChatGPT? A growing trend offers a solution: local AI, installed directly on your computer.
Popular model families include gpt-oss from OpenAI, Gemma from Google, Llama from Meta, Mistral AI’s Mistral, and the Chinese models DeepSeek-R1 from DeepSeek and Qwen from Alibaba. While locally hosted AI has existed for some time, it has improved significantly in the last year, and software now makes it accessible to everyday users.
Still, be aware: you’ll need relatively recent hardware, and the chatbots you run won’t be as powerful or feature-rich as their online counterparts.
Why Install a Local Chatbot?
When you interact with chatbots like ChatGPT, your prompts and shared documents are uploaded to servers owned by companies like OpenAI, Microsoft, and Google. Using the free versions of ChatGPT, Gemini, or Copilot means your conversations may be used to improve future models. Paid subscriptions limit this data usage, but your data still resides on their servers.
Installing a chatbot locally keeps your data on your device and out of the hands of tech companies. This offers a significant privacy advantage.
Local AI also frees you from advertising. (OpenAI has begun integrating ads into ChatGPT, starting in the US and rolling out gradually.) Companies release these free models hoping to attract users to their paid offerings.
Finally, a locally installed chatbot can function offline, which is useful in situations like air travel.
Tech-savvy users can leverage these AIs for advanced tasks, such as voice-controlled smart homes without relying on platforms like Alexa or Google Assistant. They can also customize models to better suit their needs.
What Performance Can You Expect?
The most powerful online AI models require high-end graphics cards of the kind typically found in data centers.
However, some AI models that can be installed on a personal computer in 2026 are highly effective, performing roughly on par with the free versions of Gemini or ChatGPT from early last year.

For simple tasks—like revising an email—you likely won’t notice a significant difference between online and locally installed models on a recent computer.
Online chatbots offer advanced features like internet search, in-depth research, and image editing. Local AIs can perform some of these tasks, but they often require more technical expertise, such as setting up a server to give the model internet access.
What Hardware Do You Need?
The more powerful your computer’s graphics card and the more RAM it has, the better the performance you’ll achieve with local AI models.
Theoretically, you’ll need a Mac with at least an M1 chip, or a PC with a graphics card that has at least 8 GB of video memory (VRAM).
In practice, a more powerful computer is preferable for a better user experience. Testing on an Apple MacBook Air M4 and an Asus Zenbook Duo yielded satisfactory results. A desktop PC with a gaming graphics card supports even more models.
If you’re planning a new computer purchase, prioritize a graphics card with at least 24 GB, ideally 32 GB, of video memory for acceptable performance without slowing down the rest of your system.
How Do You Install Local AI?
LM Studio (https://lmstudio.ai/) is a popular software for installing chatbots on your computer, offering uncomplicated access to models and a relatively simple interface.
Available for Mac, Windows, and Linux, the software can be intimidating, and its French translation is imperfect. However, its basic functions are easy to use, the documentation is clear (https://lmstudio.ai/docs/app/basics), and numerous online tutorials are available. Alternatives like Ollama (https://ollama.com/) are less user-friendly.
Once installed, download a model from the list and “load” it into memory to start using it. You can then open new chat windows to converse with the model, with previous conversations saved for later access.
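For readers comfortable with a little code, LM Studio can also expose a local, OpenAI-compatible server on your machine (default address http://localhost:1234/v1, per its documentation). Here is a minimal sketch of the JSON request body a script would POST to the server’s /v1/chat/completions endpoint; the model name is an example and must match one you have actually downloaded, and sending the request requires the server to be running:

```python
import json

# Sketch only: LM Studio's local server (default http://localhost:1234/v1)
# accepts OpenAI-style chat requests. This builds the request body;
# actually sending it requires the server to be running on your computer.
payload = {
    "model": "google/gemma-3-4b",   # example id; use a model you've downloaded
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Revise this email to sound more formal."},
    ],
    "stream": False,  # set True to receive the reply token by token
}

print(json.dumps(payload, indent=2))
```

From there, a few lines using Python’s standard urllib (or the requests library) would send the payload and print the model’s reply; LM Studio’s documentation covers the full API.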
Which AI Model Should You Choose?
LM Studio’s Model search tab displays models compatible with your computer. You can install others, but they may not function properly.

Start with the models displayed by default based on your computer and selected by the LM Studio team (the staff picks). These are all “open-weight models,” meaning they can be modified, installed, and used for free.
A fact sheet provides details on each model, including its ability to analyze images, its strengths (such as programming or role-playing conversations), its compatibility with advanced tools (such as internet access), and its size.
You’ll need around 4 GB of disk space for the smallest models, like Google’s Gemma 3 4B, and over 10 GB for medium-sized models, like OpenAI’s gpt-oss-20b.
