
Introduction to ChatGPT Next Web (NextChat)

Learn everything about a versatile open-source application that uses OpenAI and Google AI APIs to provide you with a better user experience. It's available on desktop and browser and can even be privately deployed.
Updated Mar 2024  · 7 min read

We all want a chat application that offers us more control, additional options, and the capability to share chat conversations privately. If you are looking for a better chat interface than ChatGPT, then NextChat, formerly known as ChatGPT Next Web, is the perfect choice.

Here, we will explore ChatGPT Next Web, learn how to use both the web and desktop applications, and discover how to deploy it on Vercel with just one click.

This tutorial is designed for all levels and is easy to follow. So, let's dive in and explore the exciting world of ChatGPT Next Web together!

Why ChatGPT Next Web?

ChatGPT Next Web, now known as NextChat, is a versatile open-source application that uses the OpenAI and Google AI APIs to access GPT-4, GPT-3.5, and Gemini Pro. It provides multimodality out of the box.

The tool is like ChatGPT but with more customization, a user-friendly interface, and the flexibility to support various applications like chatbots, summarization, content generation, research, code generation, and in-context learning.

ChatGPT Next Web on GitHub

Users can provide context to the chat and adjust various model hyperparameters to customize the responses.

It is available as a desktop app and a web app, and you can even deploy your own instance of the web application for free using the Vercel platform.

You can learn how to effectively use ChatGPT itself with DataCamp’s Introduction to ChatGPT course. Discover best practices for writing prompts and explore common business use cases for this powerful AI tool.

ChatGPT Next Web Features

ChatGPT Next Web offers a range of powerful features for users.

  1. Deploy for free on Vercel: This feature allows users to easily deploy the ChatGPT Next Web application on Vercel, a cloud platform for hosting web applications, with just one click and in less than a minute.
  2. Compact client (~5MB): The ChatGPT Next Web client is designed to be lightweight and compact, with a file size of around 5 MB.
  3. Available on Linux, Windows, and macOS: Download and install the application on any of these operating systems.
  4. Self-deployed LLMs: It is fully compatible with self-deployed LLMs, recommended for use with RWKV-Runner or LocalAI.
  5. Privacy: The application prioritizes user privacy by storing all data locally in the browser, ensuring that sensitive information is not sent to any external servers or third-party services.
  6. Markdown support: ChatGPT Next Web supports Markdown formatting, allowing users to display visually appealing and informative messages.
  7. Responsive design: The application features a responsive design that adapts to various screen sizes and devices. It also includes a dark mode option for users who prefer a different color scheme.
  8. Streaming responses: It is designed for fast performance, with a minimal first-screen load, and it supports streaming responses.
  9. Mask: The latest version of ChatGPT Next Web introduces a feature that lets users create, share, and debug chat tools using prompt templates, known as "Masks."
  10. Awesome prompts: The application features an extensive library of "awesome prompts." These prompts offer a wide range of use cases and can be used to enhance the chatbot's capabilities.
  11. Support long conversations: ChatGPT Next Web automatically compresses chat history, allowing users to engage in long conversations without consuming excessive amounts of tokens.
  12. Multi-language interface: This allows users from different regions to interact with the application in their preferred language.

Getting Started with ChatGPT Next Web

In this section, we will learn how to use ChatGPT Next Web's official application, create a Gemini API key, set up the application, and generate some code.

Web application

ChatGPT Next Web is now NextChat as it has become more than just a web application.

We can access the web app by setting up an API key. When we write a prompt and press Enter, the app prompts us to set an API key to access the models from OpenAI or Google.

NextChat web application

If you don't see the authentication page, go to the authorization page and provide your Google AI API key.

To get the API key, we need to create a Google account and go to Google AI Studio. Then, click the “Get API Key” button on the left panel. Copy the API key and paste it into the third input box.

If you want to access the Gemini model through Python, follow the Introducing Google Gemini API tutorial. It's a simple guide on how to access state-of-the-art models.

setting up API key to access the models
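If you want a quick taste of that Python route before continuing with the UI, here is a minimal sketch. It assumes the google-generativeai package is installed and your key is exported as the GOOGLE_API_KEY environment variable; the prompt is just an illustration.

    import os
    import google.generativeai as genai

    # Configure the client with the key created in Google AI Studio
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    # The same model we select in NextChat's model picker
    model = genai.GenerativeModel("gemini-pro")
    response = model.generate_content("Write Python code for a simple snake game.")
    print(response.text)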

After that, click on the robot (🤖) button above the input text bar to open the list of available models. Select “gemini-pro” and start prompting.

selecting the Gemini Pro model in NextChat

We have asked Gemini to generate Python code for a snake game. As you can see from the image below, it has done a great job.

writing prompt to generate the Python code for snake game

Desktop application

We can also download the application on Windows, Linux, and macOS. In our case, we will download the NextChat '.exe' file for the Windows operating system.

Installing NextChat from GitHub

Follow the simple steps provided to install the application on the Windows operating system. Once you have installed the application, it should launch automatically.

Instead of an authentication page, it asks you to go to the settings page and add a valid API key.

NextChat Desktop App

Go to the Settings page, scroll down, and change the “Model Provider” option to Google. Paste your Google API key.

Setting up NextChat Desktop App

We can even change the model settings to get a better response, but for now, we will keep the default settings, as shown in the image below.

NextChat Desktop App model settings
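The model settings shown here roughly correspond to the generation parameters exposed by the Gemini API. As an illustrative sketch only (the values below are arbitrary, and it again assumes the google-generativeai package and a GOOGLE_API_KEY environment variable):

    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    model = genai.GenerativeModel("gemini-pro")
    response = model.generate_content(
        "Summarize the rules of the snake game in three sentences.",
        generation_config=genai.types.GenerationConfig(
            temperature=0.7,        # higher values give more varied output
            top_p=0.9,              # nucleus-sampling cutoff
            max_output_tokens=256,  # cap on the response length
        ),
    )
    print(response.text)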

Select the “gemini-pro” model and start asking questions. As you can see, it is fast and quite intuitive.

NextChat Desktop App demo gif

We can also upload an image and ask Gemini questions about it or use it as context. To do that, change the model to “gemini-pro-vision”.

Selecting the Gemini Pro Vision model

Upload the image and type out the question.

Using Gemini Pro Vision

Perfect. The model understood the text on the image and provided a detailed explanation of the Golang code.

Output of Gemini Pro Vision
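For reference, roughly the same multimodal request can be made directly through the Gemini Python SDK. This is a minimal sketch that assumes the google-generativeai and Pillow packages are installed and that code.png is a hypothetical local screenshot of the code:

    import os
    import google.generativeai as genai
    from PIL import Image

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    # The vision model accepts a mix of text and images in one request
    model = genai.GenerativeModel("gemini-pro-vision")
    image = Image.open("code.png")  # hypothetical screenshot file
    response = model.generate_content(["Explain the code shown in this image.", image])
    print(response.text)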

Deploy the web app

If you go to the ChatGPT-Next-Web GitHub repository, you will find that you can even deploy your private web application on Vercel, Zeabur, and Gitpod.

In our case, we will deploy the web app to the Vercel platform by clicking on the "Deploy" button, as shown below.

One click deploy of NextChat on Vercel

This will redirect you to a new tab, where you have to log in to your GitHub account. Why? Because this is a private instance, any changes made to the application or environment variables will remain secure on your end.

Setting up Git repository for Vercel NextChat app

Then, select your GitHub account and enter a name for the repository. Make sure you are creating a private Git repository.

creating GitHub repo

Add the API keys.

For now, we will only provide the Google Gemini API key and fill the remaining fields with placeholder text. It doesn't matter; you can always change your API keys later in the settings.

adding API keys to access the models
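The exact field names are defined by the NextChat repository, so double-check its README, but the environment variables typically look something like this (only the Google key is real in our case; the others are placeholders):

    GOOGLE_API_KEY=<your Gemini API key>
    OPENAI_API_KEY=sk-placeholder
    CODE=<optional access password for your instance>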

After about five minutes, the application will be deployed at https://chat-gpt-next-web-two-green-22.vercel.app/. Go to Settings, provide your Google API key, and start using it.

Setting up API key in the settings.

We have asked NextChat to write API code in Go (Golang) for machine learning inference, and it has done a great job.

Deployed NextChat webapp demo

Exploring the Key Features of NextChat

In this section, we will try some key features that will enhance your overall prompting experience.

Custom models

We can load custom models from Microsoft Azure, Google, and OpenAI. These are models fine-tuned on custom datasets that you provide.

Learn how to fine-tune your own model using the Python OpenAI API by following Fine-Tuning OpenAI's GPT-4 step-by-step guide.

To add a custom model, go to the Settings page, scroll down to the “Model Provider” section, and enter the names of all your custom models, separated by commas.

adding custom model to NextChat
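For example, the comma-separated list might look like the line below. The fine-tuned model name is purely hypothetical; it just follows the ft:-prefixed format that OpenAI uses for fine-tuned models:

    ft:gpt-3.5-turbo-0125:my-org::abc123,gemini-pro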

After that, start a new chat and select the custom model by clicking on the robot button (🤖).

selecting the custom model

This way, we can easily use our fine-tuned model and test it with various prompts.

Awesome Prompts

NextChat provides access to the Awesome Prompts repo. In short, we can access top-quality prompt templates right in the text box by typing “/” and then searching for a prompt keyword.

In our case, we are looking for Python prompts.

Accessing Awesome Prompts using "/"

Once you've found a prompt that works, modify it to fit your needs and press enter. As we can see, our chatbot is acting as a Python interpreter.

using Awesome prompt

We can also add our favorite prompt templates to the prompt list by clicking the Edit button in the “Prompt List” section of Settings.

adding a new Prompt to the prompt template list.

ChatGPT Next Web Mask

Just like ChatGPT’s GPTs, NextChat offers a similar functionality called “Mask.” Mask is not just a prompt template; it comes with model settings, context settings, and all kinds of options to improve the model output.

We can create our own Mask or access hundreds of community-shared masks by clicking on the “Mask” button on the left panel.

NextChat Mask

Most of the masks are available in Chinese. To access English-only Masks, click on “Find More” and then change the language to “English.”

Checking out NextChat masks English section.

If we click on the “View” button, we can see and download all of the settings, system prompts, and context for the Mask.

NextChat Mask settings

Let’s load up the GitHub Copilot Mask and ask it to generate some code for an image classifier. For this, we must provide an OpenAI API key on the Settings page.

Checking out GitHub Copilot mask

Learn to access large language models using the OpenAI Python API by following DataCamp's article, A Beginner's Guide to Using the ChatGPT API.
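As a quick taste of that approach, here is a minimal sketch using the openai Python package (v1 or later). It assumes OPENAI_API_KEY is set in your environment, and the prompt simply mirrors the image classifier request above:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": "Write a short image classifier in Python."},
        ],
    )
    print(response.choices[0].message.content)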

Final Thoughts

ChatGPT Next Web (NextChat) provides a cost-effective alternative for users who want to save money. Instead of paying a monthly subscription fee for ChatGPT Plus, you can pay for the API on a pay-as-you-go basis. This means you are charged based on your usage, which averages around $2 per month for occasional use.

Additionally, ChatGPT Next Web offers greater customization and flexibility, allowing you to access your fine-tuned model on the desktop application.

I hope this tutorial has provided you with a comprehensive understanding of NextChat and its capabilities. By following the step-by-step instructions and exploring the various features and customization options, you can now create a chatbot that perfectly suits your needs.

New to AI? Learn how LLMs work with the AI Fundamentals skill track and build a strong foundation to thrive in the AI-powered landscape.


Author
Abid Ali Awan

I am a certified data scientist who enjoys building machine learning applications and writing blogs on data science. I am currently focusing on content creation, editing, and working with large language models.
