# Installation
System requirements, detailed setup instructions, and deployment options for SparkLumina.
## System Requirements
| Requirement | Version / Notes |
|-------------|-----------------|
| Node.js | 18.0 or newer |
| npm | 9.0 or newer |
| Git | Any recent version |
| Browser | Chrome, Firefox, Safari, or Edge (latest) |
| Docker | Optional, for containerized deployment |
| Python | 3.10+ (for the AI service only) |
## Detailed Installation Steps
### 1. Clone and Enter the Project

```bash
git clone https://github.com/your-org/TeachBoardOpenEDU.git
cd TeachBoardOpenEDU
```
### 2. Install Dependencies

From the project root:

```bash
npm install
```
This installs all packages for the monorepo: the Vite-based whiteboard app, the Next.js website, and the Express backend.
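A single `npm install` can cover all three packages when the root `package.json` declares them as npm workspaces. The sketch below is illustrative only; the directory names are assumptions, not taken from this repo:

```json
{
  "name": "teachboard-monorepo",
  "private": true,
  "workspaces": [
    "whiteboard",
    "website",
    "server"
  ]
}
```

With a layout like this, `npm run <script> --workspace=<name>` targets an individual package.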
### 3. Environment Configuration

Create a `.env` file from the example:

```bash
cp .env.example .env
```

Edit `.env` and configure these variables:
| Variable | Description | Example |
|----------|-------------|---------|
| LLM_BASE_URL | Base URL for your LLM provider (OpenAI-compatible API) | https://api.openai.com/v1 |
| LLM_API_KEY | API key for the LLM provider | sk-... |
| LLM_MODEL | Model name as the provider expects it (e.g., gpt-4 for OpenAI, or a Claude model name) | gpt-4 |
| AI_SERVICE_URL | URL of the Python AI service | http://localhost:8652 |
| BACKEND_URL | Backend API URL for the frontend | http://localhost:8056 |
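Putting the table together, a filled-in `.env` might look like this (all values are placeholders; substitute your own key and URLs):

```
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4
AI_SERVICE_URL=http://localhost:8652
BACKEND_URL=http://localhost:8056
```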
### 4. Running Each Service

Frontend (port 8051):

```bash
npm run dev
# or the workspace-specific script for the whiteboard app
```

Backend (port 8056):

```bash
npm run dev:server
# or: node server/index.js
```

AI Service (port 8652):

```bash
cd ai-service
pip install -r requirements.txt
uvicorn main:app --port 8652
```
Start all three services before using AI features. The frontend connects to the backend for collaboration; the backend proxies AI requests to the Python service.
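Once the services are up, a quick sanity check is to probe the three ports. This sketch uses bash's `/dev/tcp` pseudo-device, so it needs bash rather than a plain POSIX shell; a port that isn't listening simply reports as down:

```shell
# Probe each service port; prints "listening" or "not listening" per port.
for port in 8051 8056 8652; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port: listening"
  else
    echo "port $port: not listening"
  fi
done
```

If any port reports "not listening", start the corresponding service before using the app.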
### 5. Optional: Docker Deployment
For production, you can containerize each service. Example Dockerfile patterns:
- Frontend: Build with Vite, serve static files via nginx.
- Backend: Node.js + Express on port 8056.
- AI Service: Python FastAPI on port 8652.
Use Docker Compose to orchestrate all services and set environment variables.
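A Compose file following the patterns above might look like the sketch below. The service names, build contexts, and the Dockerfiles they reference are assumptions, not files shipped with this repo; adjust paths to match your layout:

```yaml
# docker-compose.yml — illustrative sketch only
services:
  frontend:
    build: .                # Vite build served by nginx
    ports: ["8051:80"]
  backend:
    build: ./server         # Node.js + Express
    ports: ["8056:8056"]
    env_file: .env
    environment:
      - AI_SERVICE_URL=http://ai-service:8652
  ai-service:
    build: ./ai-service     # Python FastAPI
    ports: ["8652:8652"]
    env_file: .env
```

Note that inside the Compose network, services reach each other by service name (e.g., `http://ai-service:8652`) rather than `localhost`.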
## Troubleshooting
### Port already in use

If port 8051, 8056, or 8652 is already taken, change the port in the relevant config file or in `.env`, and make sure all services point to the updated URLs.
### Socket.IO connection failed

Verify that the backend is running and that `BACKEND_URL` (or its equivalent) in the frontend matches the actual backend URL. Check CORS settings if you're accessing the app from a different origin.
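When the frontend is served from a different origin (e.g., the Vite dev server on 8051), the backend must allow that origin explicitly. The snippet below is a sketch assuming Socket.IO v4's `cors` server option; the origin value is illustrative:

```javascript
// Sketch: CORS options for a Socket.IO v4 server (origin is illustrative).
const corsOptions = {
  cors: {
    origin: "http://localhost:8051", // the frontend's origin; adjust for production
    methods: ["GET", "POST"],
  },
};

// With socket.io installed, the options are passed at construction time:
//   const { Server } = require("socket.io");
//   const io = new Server(httpServer, corsOptions);
console.log("allowed origin:", corsOptions.cors.origin);
```

A wildcard `origin: "*"` also works for local testing but should not ship to production.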
### AI features not working

Ensure the AI service is running on port 8652 and that `LLM_BASE_URL`, `LLM_API_KEY`, and `LLM_MODEL` are set correctly. The backend proxies requests to the AI service; inspect the backend logs for errors.
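A common cause is a variable that was never filled in after copying `.env.example`. This sketch checks that each AI-related variable from the table above at least appears in `.env` (run it from the project root; it reports "missing" if the file or the line is absent):

```shell
# Report which AI-related variables are present in .env.
for var in LLM_BASE_URL LLM_API_KEY LLM_MODEL AI_SERVICE_URL; do
  if grep -q "^${var}=" .env 2>/dev/null; then
    echo "$var: set"
  else
    echo "$var: missing"
  fi
done
```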
### Recording conversion (WebM → MP4) fails

The backend uses `ffmpeg` for the conversion. Install `ffmpeg` on your system and ensure it is available on your `PATH`.
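To verify the installation, check that the binary resolves on your `PATH`:

```shell
# Check for ffmpeg on PATH and print its version line if found.
if command -v ffmpeg >/dev/null 2>&1; then
  have_ffmpeg=yes
  ffmpeg -version | head -n 1
else
  have_ffmpeg=no
  echo "ffmpeg not found on PATH; install it via your package manager"
fi
```

On Debian/Ubuntu, `apt install ffmpeg` provides it; on macOS, `brew install ffmpeg`.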