Top Related Projects
🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
✨ Light and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows
Minimal web UI for ChatGPT.
GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
User-friendly Desktop Client App for AI Models/LLMs (GPT, Claude, Gemini, Ollama...)
An amazing UI for OpenAI's ChatGPT (Website + Windows + MacOS + Linux)
Quick Overview
Chanzhaoyu/chatgpt-web is an open-source project that provides a user-friendly web interface for ChatGPT. It allows users to interact with ChatGPT through a clean and responsive web application, making it easier to access and use the AI model without the need for complex setups or API integrations.
Pros
- Easy to deploy and use, with a simple and intuitive interface
- Supports multiple API endpoints, including OpenAI and custom backends
- Offers various customization options for appearance and functionality
- Includes features like conversation history and export capabilities
Cons
- Requires an API key, which may involve costs depending on usage
- Limited advanced features compared to official ChatGPT interfaces
- May require some technical knowledge for setup and customization
- Potential for API rate limiting or service interruptions
Getting Started
To set up the chatgpt-web project:
1. Clone the repository:
git clone https://github.com/Chanzhaoyu/chatgpt-web.git
2. Navigate to the project directory:
cd chatgpt-web
3. Install dependencies:
pnpm install
4. Copy the example environment file and edit it with your API key:
cp .env.example .env
5. Start the development server:
pnpm dev
6. Build for production:
pnpm build
Visit http://localhost:3000 to access the web interface. Remember to configure your API key and other settings in the .env file before use.
Competitor Comparisons
🔮 ChatGPT Desktop Application (Mac, Windows and Linux)
Pros of ChatGPT
- Cross-platform desktop application (Windows, macOS, Linux)
- Offers additional features like prompt library and text-to-speech
- Regular updates and active development
Cons of ChatGPT
- Larger application size due to being an Electron-based desktop app
- May require more system resources compared to a web-based solution
- Less customizable for self-hosting scenarios
Code Comparison
ChatGPT (TypeScript):
export const chatgpt = () => {
  ipcMain.handle('chatgpt-api', async (_, messages: ChatMessage[]) => {
    try {
      const completion = await openai.createChatCompletion({
        model: 'gpt-3.5-turbo',
        messages,
      });
      return completion.data.choices[0].message;
    } catch (err: any) {
      console.error(err);
      return null;
    }
  });
};
chatgpt-web (JavaScript):
async function chatConfig() {
  const response = await axios.post(
    '/api/chat-process',
    { prompt: message, options: { conversationId: conversationId } },
    { signal: controller.signal }
  )
  return response.data
}
Both repositories provide interfaces to interact with ChatGPT, but ChatGPT offers a desktop application experience, while chatgpt-web is designed as a web-based solution. The code snippets show different approaches to handling API requests, with ChatGPT using Electron's IPC for communication and chatgpt-web utilizing Axios for HTTP requests.
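The chatgpt-web snippet above passes `controller.signal` with the request so the UI can cancel an in-flight completion. A minimal, self-contained sketch of that cancellation pattern (not taken from either codebase; `fakeChatRequest` is a stand-in for the real HTTP call):

```typescript
// Sketch of the AbortController pattern: the UI keeps a controller and passes
// its signal along with the request, so a "stop" button can cancel mid-flight.
function fakeChatRequest(prompt: string, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    // Simulate a slow backend response.
    const timer = setTimeout(() => resolve(`echo: ${prompt}`), 1000)
    signal.addEventListener('abort', () => {
      clearTimeout(timer)
      reject(new Error('aborted'))
    })
  })
}

async function main() {
  const controller = new AbortController()
  const pending = fakeChatRequest('hello', controller.signal)
  controller.abort() // e.g. the user clicked "stop generating"
  try {
    await pending
    console.log('completed')
  } catch (err) {
    console.log((err as Error).message) // "aborted"
  }
}

main()
```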
✨ Light and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows
Pros of NextChat
- Built with Next.js, offering better performance and SEO capabilities
- Supports multiple AI models beyond just ChatGPT
- More extensive customization options for user interface
Cons of NextChat
- Potentially more complex setup due to additional features
- May require more resources to run compared to chatgpt-web
- Less focused on simplicity, which could be overwhelming for some users
Code Comparison
chatgpt-web:
<script setup lang="ts">
import { computed, ref } from 'vue'
import { NInput } from 'naive-ui'
import { useChat } from '@/hooks/useChat'
</script>
NextChat:
import { useState, useEffect, useRef } from 'react'
import { useRouter } from 'next/router'
import { useTranslation } from 'next-i18next'
import { serverSideTranslations } from 'next-i18next/serverSideTranslations'
Both projects use modern JavaScript frameworks, but NextChat leverages Next.js and React, while chatgpt-web uses Vue. NextChat's code shows integration with internationalization libraries, indicating more advanced localization features. chatgpt-web's code snippet reveals a simpler structure, focusing on core chat functionality.
Minimal web UI for ChatGPT.
Pros of chatgpt-demo
- Simpler and more lightweight implementation
- Easier to customize and extend
- Better documentation and examples
Cons of chatgpt-demo
- Fewer features out of the box
- Less polished user interface
- Limited multi-language support
Code Comparison
chatgpt-demo:
export async function fetchChatCompletion(options: ChatRequest) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(options),
  })
  return response.json()
}
chatgpt-web:
async function fetchChatAPI<T = any>(
  path: string,
  data?: any,
  method: Method = 'post',
) {
  return await request<T>({
    url: `${BASE_URL}/${path}`,
    method,
    data,
  })
}
The code comparison shows that chatgpt-demo uses a simpler approach for API requests, while chatgpt-web employs a more flexible and reusable function. This reflects the overall design philosophy of each project, with chatgpt-demo focusing on simplicity and chatgpt-web offering more advanced features and customization options.
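The reusable-wrapper idea behind `fetchChatAPI` can be sketched as follows. This is a hypothetical illustration, not code from chatgpt-web: `makeApiClient`, `Transport`, and `fakeTransport` are invented names, and the fake transport stands in for axios/fetch so the sketch runs offline:

```typescript
// One generic function owns the base URL and transport; each call site only
// supplies the path, payload, and expected response type.
type Transport = (url: string, method: string, data?: unknown) => Promise<unknown>

const BASE_URL = '/api'

function makeApiClient(transport: Transport) {
  return async function request<T>(path: string, data?: unknown, method = 'post'): Promise<T> {
    return (await transport(`${BASE_URL}/${path}`, method, data)) as T
  }
}

// A fake transport standing in for axios/fetch: it just echoes its arguments.
const fakeTransport: Transport = async (url, method, data) => ({ url, method, echoed: data })

async function demo() {
  const request = makeApiClient(fakeTransport)
  const res = await request<{ url: string; method: string }>('chat-process', { prompt: 'hi' })
  console.log(res.url, res.method)
}

demo()
```

Centralizing the URL and transport this way is what lets chatgpt-web swap backends or add headers in one place, at the cost of a little indirection compared with chatgpt-demo's direct `fetch` call.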
GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.
Pros of ChuanhuChatGPT
- More extensive language support, including Chinese and English interfaces
- Offers additional features like API key management and model selection
- Supports multiple chat modes, including standard chat and academic paper writing assistance
Cons of ChuanhuChatGPT
- Less polished user interface compared to chatgpt-web
- Requires more setup and configuration for full functionality
- May have a steeper learning curve for non-technical users
Code Comparison
chatgpt-web:
const message = ref('')
const loading = ref(false)
const controller = ref<AbortController>()
ChuanhuChatGPT:
def predict(self, inputs, max_length=512, top_p=0.7, temperature=0.95):
    input_ids = self.tokenizer.encode(inputs, return_tensors="pt")
    with torch.no_grad():
        outputs = self.model.generate(input_ids, max_length=max_length, top_p=top_p, temperature=temperature)
    return self.tokenizer.decode(outputs[0], skip_special_tokens=True)
The code snippets highlight the different approaches:
- chatgpt-web uses Vue.js for frontend development
- ChuanhuChatGPT employs Python for backend processing and model interaction
Both projects aim to provide ChatGPT-like functionality, but ChuanhuChatGPT offers more advanced features at the cost of complexity, while chatgpt-web focuses on a simpler, more user-friendly approach.
User-friendly Desktop Client App for AI Models/LLMs (GPT, Claude, Gemini, Ollama...)
Pros of chatbox
- Cross-platform support (Windows, macOS, Linux)
- Offers a desktop application with a more native user experience
- Includes additional features like text-to-speech and custom API endpoints
Cons of chatbox
- Requires installation and updates, unlike the web-based chatgpt-web
- May have a steeper learning curve for non-technical users
- Potentially higher resource usage compared to a web application
Code comparison
chatbox (TypeScript):
const handleSubmit = async () => {
  if (inputMessage.trim() === '') return
  const newMessage: Message = {
    role: 'user',
    content: inputMessage,
  }
  setMessages([...messages, newMessage])
  setInputMessage('')
  await sendMessage(newMessage)
}
chatgpt-web (Vue.js):
<script setup lang="ts">
import { ref, onMounted } from 'vue'

const messageList = ref<ChatMessage[]>([])
const loading = ref<boolean>(false)
const controller = ref<AbortController>()

onMounted(() => {
  scrollToBottom()
})
</script>
Both projects aim to provide user-friendly interfaces for interacting with ChatGPT, but they take different approaches. chatbox offers a desktop application with broader platform support and additional features, while chatgpt-web provides a simpler, web-based solution that may be more accessible for some users. The code snippets show different implementation languages and frameworks, reflecting their distinct architectures.
An amazing UI for OpenAI's ChatGPT (Website + Windows + MacOS + Linux)
Pros of BetterChatGPT
- More feature-rich, including conversation management and export options
- Supports multiple AI models beyond just ChatGPT
- Offers a more customizable user interface
Cons of BetterChatGPT
- May have a steeper learning curve due to additional features
- Potentially higher resource usage due to expanded functionality
Code Comparison
BetterChatGPT (React):
const Chat = () => {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  // ... more complex state management
};
chatgpt-web (Vue):
<script setup lang="ts">
import { ref } from 'vue'
const messageList = ref<Message[]>([])
const loading = ref<boolean>(false)
</script>
BetterChatGPT uses React and appears to have more complex state management, while chatgpt-web uses Vue with simpler state handling. BetterChatGPT's codebase reflects its broader feature set, potentially making it more challenging to maintain but offering greater flexibility. chatgpt-web's simpler structure may be easier to understand and modify for basic use cases.
README
ChatGPT Web
Disclaimer: This project is published only on GitHub, under the MIT license, free of charge and for open-source learning purposes. There is no account selling, paid service, or official discussion group of any kind. Beware of scams.

- ChatGPT Web
Introduction
Supports dual models and provides two unofficial ChatGPT API methods:
| Method | Free? | Reliability | Quality |
|---|---|---|---|
| ChatGPTAPI (gpt-3.5-turbo-0301) | No | Reliable | Relatively stupid |
| ChatGPTUnofficialProxyAPI (web accessToken) | Yes | Relatively unreliable | Smart |
Comparison:
- ChatGPTAPI uses gpt-3.5-turbo through the official OpenAI API to call ChatGPT
- ChatGPTUnofficialProxyAPI uses an unofficial proxy server to access ChatGPT's backend API and bypass Cloudflare (dependent on third-party servers, and has rate limits)
Warnings:
- You should use the API method first.
- When using the API, if the network is not working (it is blocked in China), you need to build your own proxy. Never use someone else's public proxy, which is dangerous.
- When using the accessToken method, the reverse proxy will expose your access token to third parties. This should not have any adverse effects, but please consider the risks before using this method.
- When using accessToken, a proxy will be used whether your machine is domestic or foreign. The default proxy is pengzhile's https://ai.fakeopen.com/api/conversation. This is not a backdoor or monitoring unless you are able to bypass CF verification yourself. Acknowledge this before use. Community Proxy (Note: Only these two are recommended; for other third-party sources, please verify them yourself)
- When publishing the project to a public network, you should set the AUTH_SECRET_KEY variable to add password access, and you should also modify the title in index.html to prevent the site from being found via keyword searches.
Switching methods:
1. Enter the service/.env.example file and copy its contents to the service/.env file
2. To use the OpenAI API Key, fill in the OPENAI_API_KEY field (get apiKey)
3. To use the Web API, fill in the OPENAI_ACCESS_TOKEN field (get accessToken)
4. The OpenAI API Key takes precedence when both exist
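The selection rule described above can be sketched in TypeScript. This is an illustration of the documented precedence only, not the project's actual code; `selectMethod` is an invented name:

```typescript
// If both variables are set, OPENAI_API_KEY wins and the official ChatGPTAPI
// method is used; otherwise OPENAI_ACCESS_TOKEN selects the unofficial Web API.
type ApiMethod = 'ChatGPTAPI' | 'ChatGPTUnofficialProxyAPI'

function selectMethod(env: { OPENAI_API_KEY?: string; OPENAI_ACCESS_TOKEN?: string }): ApiMethod {
  if (env.OPENAI_API_KEY)
    return 'ChatGPTAPI'
  if (env.OPENAI_ACCESS_TOKEN)
    return 'ChatGPTUnofficialProxyAPI'
  throw new Error('Set OPENAI_API_KEY or OPENAI_ACCESS_TOKEN in service/.env')
}

console.log(selectMethod({ OPENAI_API_KEY: 'sk-xxx', OPENAI_ACCESS_TOKEN: 'tok' })) // ChatGPTAPI
```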
Environment variables:
See all parameter variables here
Roadmap
[✓] Dual models
[✓] Multi-session storage and context logic
[✓] Formatting and beautification of code and other message types
[✓] Access control
[✓] Data import/export
[✓] Save messages as local images
[✓] Multilingual interface
[✓] Interface themes
[ ] More...
Prerequisites
Node
node requires version ^16 || ^18 || ^19 (node >= 14 requires a fetch polyfill); use nvm to manage multiple local node versions
node -v
PNPM
If you haven't installed pnpm
npm install pnpm -g
Filling in the Key
Get an OpenAI API Key or accessToken and fill in the local environment variables (see Introduction)
# service/.env file
# OpenAI API Key - https://platform.openai.com/overview
OPENAI_API_KEY=
# change this to an `accessToken` extracted from the ChatGPT site's `https://chat.openai.com/api/auth/session` response
OPENAI_ACCESS_TOKEN=
Install Dependencies
To make the project easier for backend developers to understand, the front-end "workspace" mode is not used; the front end and back end live in separate folders instead. If you only need to do secondary development of the front-end page, you can delete the service folder.
Backend
Enter the folder /service and run the following commands
pnpm install
Frontend
Run the following commands at the root directory
pnpm bootstrap
Run in Test Environment
Backend Service
Enter the folder /service and run the following commands
pnpm start
Frontend Webpage
Run the following commands at the root directory
pnpm dev
Environment Variables
API available:
- OPENAI_API_KEY and OPENAI_ACCESS_TOKEN: choose one
- OPENAI_API_MODEL: set model, optional, default: gpt-3.5-turbo
- OPENAI_API_BASE_URL: set interface address, optional, default: https://api.openai.com
- OPENAI_API_DISABLE_DEBUG: disable interface debug logs, optional, default: empty (does not disable)
ACCESS_TOKEN available:
- OPENAI_ACCESS_TOKEN and OPENAI_API_KEY: choose one; OPENAI_API_KEY takes precedence when both exist
- API_REVERSE_PROXY: set reverse proxy, optional, default: https://ai.fakeopen.com/api/conversation, Community (Note: Only these two are recommended; for other third-party sources, please verify them yourself)
Common:
- AUTH_SECRET_KEY: access permission key, optional
- MAX_REQUEST_PER_HOUR: maximum number of requests per hour, optional, unlimited by default
- TIMEOUT_MS: timeout in milliseconds, optional
- SOCKS_PROXY_HOST: takes effect together with SOCKS_PROXY_PORT, optional
- SOCKS_PROXY_PORT: takes effect together with SOCKS_PROXY_HOST, optional
- HTTPS_PROXY: supports http, https, socks5, optional
- ALL_PROXY: supports http, https, socks5, optional
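To make the MAX_REQUEST_PER_HOUR setting concrete, here is an illustrative sketch only (chatgpt-web's real limiter may differ): an hourly cap enforced with a sliding one-hour window of request timestamps; `HourlyLimiter` is an invented name:

```typescript
// A cap of 0 (or less) means unlimited, matching the "unlimited by default"
// behavior documented above.
const HOUR_MS = 60 * 60 * 1000

class HourlyLimiter {
  private hits: number[] = []
  constructor(private maxPerHour: number) {}

  // Returns true if the request is allowed, false if the hourly cap is hit.
  allow(now: number = Date.now()): boolean {
    // Drop timestamps older than one hour, then check the remaining count.
    this.hits = this.hits.filter(t => now - t < HOUR_MS)
    if (this.maxPerHour > 0 && this.hits.length >= this.maxPerHour)
      return false
    this.hits.push(now)
    return true
  }
}

const limiter = new HourlyLimiter(2)
console.log(limiter.allow(0), limiter.allow(1), limiter.allow(2)) // true true false
```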
Packaging
Use Docker
Docker Parameter Examples

Docker build & Run
docker build -t chatgpt-web .
# Foreground running
docker run --name chatgpt-web --rm -it -p 127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key chatgpt-web
# Background running
docker run --name chatgpt-web -d -p 127.0.0.1:3002:3002 --env OPENAI_API_KEY=your_api_key chatgpt-web
# Run address
http://localhost:3002/
Docker compose
version: '3'

services:
  app:
    image: chenzhaoyu94/chatgpt-web # always use latest; pull the image again to update
    ports:
      - 127.0.0.1:3002:3002
    environment:
      # choose one
      OPENAI_API_KEY: sk-xxx
      # choose one
      OPENAI_ACCESS_TOKEN: xxx
      # API interface address, optional, available when OPENAI_API_KEY is set
      OPENAI_API_BASE_URL: xxx
      # API model, optional, available when OPENAI_API_KEY is set, https://platform.openai.com/docs/models
      # gpt-4, gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4-turbo-preview, gpt-4-0125-preview, gpt-4-1106-preview, gpt-4-0314, gpt-4-0613, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-0613, gpt-3.5-turbo-16k, gpt-3.5-turbo-16k-0613, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-3.5-turbo-0613, text-davinci-003, text-davinci-002, code-davinci-002
      OPENAI_API_MODEL: xxx
      # reverse proxy, optional
      API_REVERSE_PROXY: xxx
      # access permission key, optional
      AUTH_SECRET_KEY: xxx
      # maximum number of requests per hour, optional, unlimited by default
      MAX_REQUEST_PER_HOUR: 0
      # timeout in milliseconds, optional
      TIMEOUT_MS: 60000
      # Socks proxy, optional, takes effect with SOCKS_PROXY_PORT
      SOCKS_PROXY_HOST: xxx
      # Socks proxy port, optional, takes effect with SOCKS_PROXY_HOST
      SOCKS_PROXY_PORT: xxx
      # HTTPS proxy, optional, supports http, https, socks5
      HTTPS_PROXY: http://xxx:7890
- OPENAI_API_BASE_URL: optional, available when OPENAI_API_KEY is set
- OPENAI_API_MODEL: optional, available when OPENAI_API_KEY is set
Prevent Crawlers
nginx
Add the following configuration to your nginx configuration file to prevent crawlers. You can refer to the docker-compose/nginx/nginx.conf file for additional anti-crawler methods.
# Prevent crawlers
if ($http_user_agent ~* "360Spider|JikeSpider|Spider|spider|bot|Bot|2345Explorer|curl|wget|webZIP|qihoobot|Baiduspider|Googlebot|Googlebot-Mobile|Googlebot-Image|Mediapartners-Google|Adsbot-Google|Feedfetcher-Google|Yahoo! Slurp|Yahoo! Slurp China|YoudaoBot|Sosospider|Sogou spider|Sogou web spider|MSNBot|ia_archiver|Tomato Bot|NSPlayer|bingbot")
{
return 403;
}
Deploy with Railway
Railway Environment Variables
| Environment variable name | Required | Remarks |
|---|---|---|
| PORT | Required | Default 3002 |
| AUTH_SECRET_KEY | Optional | Access permission key |
| MAX_REQUEST_PER_HOUR | Optional | Maximum number of requests per hour, unlimited by default |
| TIMEOUT_MS | Optional | Timeout in milliseconds |
| OPENAI_API_KEY | OpenAI API, choose one | apiKey required for OpenAI API (get apiKey) |
| OPENAI_ACCESS_TOKEN | Web API, choose one | accessToken required for Web API (get accessToken) |
| OPENAI_API_BASE_URL | Optional, available with OpenAI API | API interface address |
| OPENAI_API_MODEL | Optional, available with OpenAI API | API model |
| API_REVERSE_PROXY | Optional, available with Web API | Web API reverse proxy address Details |
| SOCKS_PROXY_HOST | Optional, takes effect with SOCKS_PROXY_PORT | Socks proxy |
| SOCKS_PROXY_PORT | Optional, takes effect with SOCKS_PROXY_HOST | Socks proxy port |
| SOCKS_PROXY_USERNAME | Optional, takes effect with SOCKS_PROXY_HOST | Socks proxy username |
| SOCKS_PROXY_PASSWORD | Optional, takes effect with SOCKS_PROXY_HOST | Socks proxy password |
| HTTPS_PROXY | Optional | HTTPS proxy, supports http, https, socks5 |
| ALL_PROXY | Optional | All proxies, supports http, https, socks5 |
Note: Modifying environment variables on Railway will trigger a redeploy
Deploy with Sealos
Environment variables are consistent with Docker environment variables
Package Manually
Backend Service
If you don't need the node interface of this project, you can skip the following steps.
Copy the service folder to a server with a node environment.
# Install
pnpm install
# Pack
pnpm build
# Run
pnpm prod
PS: It is also okay to run pnpm start directly on the server without packing
Frontend Webpage
1. Modify the VITE_GLOB_API_URL field in the .env file at the root directory to your actual backend interface address
2. Run the following commands at the root directory, then copy the files in the dist folder to the root directory of your website service
[Reference](https://cn.vitejs.dev/guide/static-deploy.html#building-the-app)
pnpm build
FAQ
Q: Why does Git commit always report errors?
A: Because there is a commit message verification, please follow the Commit Guide
Q: Where to change the request interface if only the front-end page is used?
A: The VITE_GLOB_API_URL field in the .env file at the root directory.
Q: All files explode red when saving?
A: In VS Code, install the plug-ins recommended by the project, or manually install the ESLint plug-in.
Q: No typewriter effect on the front end?
A: One possible reason is that, behind an Nginx reverse proxy, buffering is enabled: Nginx tries to buffer some data from the backend before sending it to the browser. Try adding proxy_buffering off; after the reverse-proxy parameters, then reload Nginx. Other web server configurations are similar.
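A minimal illustration of that fix, assuming a typical location block that proxies to the backend (the path and port here match the Docker examples above but should be adapted to your setup):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:3002;
    # Disable response buffering so streamed tokens reach the browser
    # as they arrive, restoring the typewriter effect.
    proxy_buffering off;
}
```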
Contributing
Please read the Contributing Guide before contributing
Thanks to everyone who has contributed!
Acknowledgements
Thanks to JetBrains for providing a free open-source license for this project.
Sponsors
If you find this project helpful and can afford it, you can give me a little support. In any case, thanks for your support~
WeChat Pay
Alipay
License
MIT © [ChenZhaoYu]