OpenUI – Open-source tool for web interfaces in 1 prompt!

22/04/2025

Want to create beautiful web interfaces without mastering code? OpenUI makes it happen! Simply describe your idea, and this open-source tool transforms it into a real interface in seconds. Join Tentenai.vn as we explore what OpenUI is, its features, and how to use this game-changing tool!

What is OpenUI?

OpenUI is an open-source tool that makes creating user interfaces (UI) fast and fun. Just describe your vision in words, and OpenUI instantly turns it into a visual interface. Developed by Weights & Biases (W&B), this tool is a favorite among tech enthusiasts for prototyping applications, especially those powered by large language models (LLMs).

Key features of OpenUI

Fast and user-friendly

Describe your interface, and OpenUI generates HTML code or converts it to React, Svelte, or Web Components. Preview your design in just seconds!


Flexible and easy to integrate

OpenUI supports multiple AI models, including OpenAI, Groq, Gemini, Anthropic, and Ollama. It also integrates with LiteLLM to connect with most LLM services available.


Open-Source

Freely available on GitHub, OpenUI is accessible to beginners and seasoned developers alike.

Who Should Use OpenUI?

OpenUI is perfect for:

  • Developers: Quickly prototype interfaces for web or mobile apps.
  • UI/UX Designers: Test design ideas without writing code.
  • AI Projects: Build interfaces for LLM-based applications.
  • Learners: Beginners can practice designing simple interfaces.

How to Install and Use OpenUI

Prerequisites

1. Environment:

  • Python 3.8 or higher.
  • Node.js and npm (for the frontend).
  • Docker (optional, for containerized setup).
  • Git (to clone the repository).

2. API Keys (Optional):

  • For AI models like OpenAI, you’ll need an OPENAI_API_KEY.
  • Other models (Groq, Anthropic, Gemini, etc.) require their respective API keys.
  • For local models (e.g., Ollama), set OPENAI_API_KEY=xxx (a placeholder value; no real key is checked).

3. Supported OS: Linux, macOS, Windows.
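
The prerequisites above can be verified with a short script. This is a minimal sketch (it only checks that the tools are on your PATH and that the interpreter meets the minimum version; the helper names are made up):

```python
import shutil
import sys

# Check that the running interpreter meets the minimum Python version (3.8+).
def python_ok(min_version=(3, 8)):
    return sys.version_info[:2] >= min_version

# Return the required command-line tools that are NOT found on PATH.
def missing_tools(tools=("git", "node", "npm")):
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    print("Python OK:", python_ok())
    print("Missing tools:", missing_tools())
```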

Installation steps

Step 1: Clone the Repository

Open a terminal and run:

    git clone https://github.com/wandb/openui
    cd openui

Step 2: Set Up the Environment

OpenUI has two components: backend (Python) and frontend (TypeScript/React).

1. Backend Setup (Python):

a. Create a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate

b. Navigate to the backend folder and install:

    cd backend
    pip install .

c. Install additional dependencies (if needed): For extra AI models via LiteLLM, run:

    pip install litellm

d. Configure API Keys: For local models (e.g., Ollama):

    export OPENAI_API_KEY=xxx

For other models, replace xxx with the actual key:

    export OPENAI_API_KEY=your_openai_key
    export ANTHROPIC_API_KEY=your_anthropic_key  # Optional
    export GROQ_API_KEY=your_groq_key            # Optional
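
The key-selection logic above boils down to reading environment variables, with the placeholder value standing in when only local models are used. A minimal sketch (the variable names come from the steps above; the helper itself is hypothetical — OpenUI reads the variables directly):

```python
import os

# Resolve the API key for a given provider. For local models (e.g., Ollama)
# a placeholder is enough, since the key is never validated upstream.
def resolve_key(var_name, env=None, local_only=False):
    env = os.environ if env is None else env
    key = env.get(var_name)
    if key:
        return key
    if local_only:
        return "xxx"  # placeholder accepted when only local models are used
    raise KeyError(f"{var_name} is not set; export it before starting the backend")
```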

2. Frontend Setup (Vite, React, and Tailwind CSS):

a. Navigate to the frontend folder:

    cd ../frontend

b. Install Node.js dependencies:

    npm install

c. Build the frontend (generates static files for the backend):

    npm run build

This copies the built files to backend/openui/dist.

3. Optional: Set Up Ollama (for Local Models)

If you want to use local models such as Llava:

a. Install Ollama (follow instructions at ollama.ai). 

b. Pull the Llava model:

    ollama pull llava

c. Ensure Ollama is running:

    ollama serve

If Ollama isn’t running on http://127.0.0.1:11434, set:

    export OLLAMA_HOST=http://your_ollama_host:port
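
The override above follows the usual default-then-override pattern: use OLLAMA_HOST if it is set, otherwise fall back to the default address. A minimal sketch (the default URL comes from the text; the helper name is made up):

```python
import os

# Return the base URL to use for Ollama: the OLLAMA_HOST override if set,
# otherwise the default address (http://127.0.0.1:11434).
def ollama_base_url(env=None):
    env = os.environ if env is None else env
    return env.get("OLLAMA_HOST", "http://127.0.0.1:11434")
```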

Step 3: Run the Application

You can run OpenUI locally or via Docker.

Option 1: Run Locally:

1. Backend: From the backend folder, run:

    python -m openui --dev

The --dev flag enables development mode with auto-reload.

2. Frontend: In a separate terminal, from the frontend folder, run:

    npm run dev

Access the interface at http://localhost:5173.

Option 2: Run with Docker: From the project’s root folder, run:

    docker-compose up

This starts both the backend and Ollama (if configured). Access the app at http://localhost:7878.

Using OpenUI

1. Access the Interface:

If running locally, open http://localhost:5173 (or http://localhost:7878 if using Docker).


2. Create UI:

  • Enter a natural language description (e.g., “Create a blue button with ‘Click me’ text”).
  • OpenUI uses the selected AI model to generate HTML and display the interface instantly.
  • Edit the UI or convert the code to React, Svelte, or Web Components.
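
Under the hood, a description like the one above is sent to the selected model as a chat-style request. The sketch below shows roughly what such a request could look like; the system prompt and function name are invented for illustration and are not OpenUI's actual internals:

```python
# Build a chat-completion style payload from a natural-language UI
# description. The system prompt here is illustrative only; OpenUI's
# real prompts differ.
def build_ui_request(description, model="llava"):
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Return a single HTML snippet implementing the described UI."},
            {"role": "user", "content": description},
        ],
    }

request = build_ui_request("Create a blue button with 'Click me' text")
```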

3. Configure AI Models:

Click the settings icon (gear) to choose an AI model (OpenAI, Ollama, Groq, etc.).


Local models like Llava appear if installed.

4. Save and Export:

  • Copy the generated HTML or converted code (React, Svelte, etc.).
  • Save UI designs for later use.

Important Notes

a) API Key: If you see AuthenticationError: Incorrect API key provided, verify your API key, or set OPENAI_API_KEY=xxx when using Ollama.

b) Performance: Local models like Llava require a strong machine (at least 16GB RAM).

c) Installation Issues:

  • On macOS (M1), if weave causes errors, delete the .python-version files in the root and backend folders, then recreate the virtual environment.
  • Ensure a stable internet connection when cloning the repository or installing dependencies.

d) Customization:

  • Use LiteLLM to connect to additional AI models. Check LiteLLM’s documentation for proxy setup.
  • To run on Gitpod or Codespaces, see .gitpod.yml in the source code.
