# Real-Time Emotion Detection with AI

*Advanced Emotion Analytics System using Deep Learning*
## Overview
Real-Time Emotion Detection is a state-of-the-art emotion recognition system that analyzes facial expressions in real-time using advanced deep learning models. This project combines multiple AI models (DeepFace, FER2013 Mini-XCEPTION) with MediaPipe face detection to deliver accurate, fast, and robust emotion analysis.
## Key Features
- Multi-Model Ensemble: Combines DeepFace and custom-trained CNN models for superior accuracy
- Real-Time Performance: 60+ FPS with optimized frame processing and multithreading
- Professional UI: Cyberpunk-inspired HUD with live emotion probability bars
- 7 Emotion Classes: Detects Happy, Sad, Angry, Surprise, Fear, Disgust, and Neutral
- Video Recording: Built-in screen recording for content creation
- Modular Architecture: Clean, maintainable code structure following best practices
- Dataset Integration: Direct download from Kaggle FER2013 dataset
## Quick Start

### Prerequisites
- Python 3.8 or higher
- Webcam/Camera
- (Optional) GPU with CUDA for faster processing
### Installation

1. Clone the repository

   ```bash
   git clone https://github.com/Shayanthn/Real-Time-Emotion-Detection-with-OpenCV-DeepFace.git
   cd Real-Time-Emotion-Detection-with-OpenCV-DeepFace
   ```

2. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

3. (Optional) Download the pre-trained model

   Download the FER2013 Mini-XCEPTION model from the oarriaga/face_classification repository (see Acknowledgments) and place it in the project root directory.

4. Run the application

   ```bash
   python main.py
   ```
## Dataset Setup (Optional)

To train your own models or experiment with the FER2013 dataset:

1. Get Kaggle API credentials

   - Go to Kaggle Account Settings
   - Click "Create New API Token"
   - Save the downloaded `kaggle.json` to `~/.kaggle/` (Linux/macOS) or `C:\Users\<username>\.kaggle\` (Windows)

2. Download the dataset

   ```bash
   python scripts/download_dataset.py
   ```

For detailed Kaggle setup instructions, see KAGGLE_SETUP.md.
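Before running the downloader, it can help to confirm the credentials are where the Kaggle client expects them. This stdlib-only snippet is an illustrative sanity check, not part of the project; the helper name `find_kaggle_credentials` is made up here, and `KAGGLE_CONFIG_DIR` is the environment variable the Kaggle client honors as an override:

```python
import json
import os
from pathlib import Path
from typing import Optional


def find_kaggle_credentials() -> Optional[Path]:
    """Return the path to kaggle.json if valid-looking credentials exist."""
    config_dir = Path(os.environ.get("KAGGLE_CONFIG_DIR", Path.home() / ".kaggle"))
    cred = config_dir / "kaggle.json"
    if not cred.is_file():
        return None
    try:
        data = json.loads(cred.read_text())
    except json.JSONDecodeError:
        return None
    # A valid credentials file contains both a username and an API key.
    return cred if {"username", "key"} <= data.keys() else None
```

If this returns `None`, revisit step 1 above before running `scripts/download_dataset.py`.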
## Controls

| Key | Action |
|---|---|
| `Q` | Quit application |
| `R` | Toggle video recording |
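In an OpenCV display loop, keys like these are typically read once per frame via `cv2.waitKey` and mapped to actions. A minimal, dependency-free sketch of that mapping (the function name `handle_key` is illustrative, not the project's actual API):

```python
def handle_key(key: int, recording: bool):
    """Map a keypress code to (should_quit, new_recording_state).

    `key` is the value returned by cv2.waitKey(1); its high bits are
    platform-dependent, so only the low byte is compared.
    """
    key &= 0xFF
    if key in (ord("q"), ord("Q")):
        return True, recording       # quit, leave recording state unchanged
    if key in (ord("r"), ord("R")):
        return False, not recording  # toggle video recording
    return False, recording          # any other key: no-op
```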
## Project Structure

```
Real-Time-Emotion-Detection/
|
+-- main.py                    # Main application entry point
+-- requirements.txt           # Python dependencies
+-- KAGGLE_SETUP.md            # Kaggle dataset setup guide
|
+-- src/
|   +-- __init__.py
|   +-- config.py              # Configuration settings
|   |
|   +-- core/
|   |   +-- __init__.py
|   |   +-- analyzer.py        # Emotion analysis engine
|   |   +-- camera.py          # Video stream handler
|   |
|   +-- ui/
|   |   +-- __init__.py
|   |   +-- visualizer.py      # HUD and visualization
|   |
|   +-- utils/
|       +-- __init__.py
|       +-- fps_counter.py     # FPS calculation utility
|       +-- logger.py          # Logging utility
|
+-- scripts/
|   +-- download_dataset.py    # Kaggle dataset downloader
|
+-- data/                      # Dataset directory (created after download)
+-- LICENSE
```
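As an example of the utility layer, an FPS counter like the one in `src/utils/fps_counter.py` is usually a rolling average over recent frame timestamps. This is a sketch of the general technique, not the project's exact code:

```python
import time
from collections import deque


class FPSCounter:
    """Rolling-average FPS over the last `window` frame timestamps."""

    def __init__(self, window: int = 30):
        self.timestamps = deque(maxlen=window)

    def tick(self) -> None:
        """Record a frame; call once per displayed frame."""
        self.timestamps.append(time.perf_counter())

    @property
    def fps(self) -> float:
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        # N timestamps span N-1 frame intervals.
        return (len(self.timestamps) - 1) / elapsed if elapsed > 0 else 0.0
```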
## Technical Details

### Architecture

- Face Detection: MediaPipe Face Detection (faster and more accurate than Haar cascades)
- Emotion Analysis:
  - Primary: custom FER2013 Mini-XCEPTION CNN
  - Fallback: DeepFace with multiple backend support
- Performance Optimization:
  - Frame throttling (analyze every N frames)
  - Multithreaded analysis pipeline
  - Efficient NumPy operations
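Frame throttling is what keeps the display smooth while the heavier emotion model runs less often: cheap frames reuse the most recent result. A simplified sketch of the idea (`process_stream` and the `analyze` callable are illustrative stand-ins, not the project's actual functions):

```python
ANALYSIS_THROTTLE = 3  # mirrors src/config.py: analyze every 3rd frame


def process_stream(frames, analyze):
    """Run `analyze` on every Nth frame; reuse the cached result otherwise."""
    last_result = None
    results = []
    for i, frame in enumerate(frames):
        if i % ANALYSIS_THROTTLE == 0:
            last_result = analyze(frame)  # expensive model call
        results.append(last_result)       # throttled frames reuse the cache
    return results
```

In the real application the expensive call would additionally run on a worker thread so it never blocks the display loop.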
### Emotion Classes

The system recognizes 7 fundamental emotions based on Paul Ekman's research:
| Emotion | Color Code | Description |
|---|---|---|
| Happy | Yellow/Cyan | Joy, pleasure, satisfaction |
| Sad | Blue | Sorrow, grief, melancholy |
| Angry | Red | Irritation, rage, fury |
| Surprise | Magenta | Shock, amazement, astonishment |
| Fear | Orange | Anxiety, terror, apprehension |
| Disgust | Green | Revulsion, distaste, aversion |
| Neutral | Gray | No strong emotion detected |
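In code, a mapping like the table above is naturally a dictionary of emotion label to OpenCV BGR color. This is a hedged sketch approximating the table; the actual names and values in `src/config.py` may differ:

```python
# BGR tuples (OpenCV channel order), approximating the color table above.
EMOTION_COLORS = {
    "happy": (0, 255, 255),     # yellow
    "sad": (255, 0, 0),         # blue
    "angry": (0, 0, 255),       # red
    "surprise": (255, 0, 255),  # magenta
    "fear": (0, 165, 255),      # orange
    "disgust": (0, 255, 0),     # green
    "neutral": (128, 128, 128), # gray
}


def color_for(emotion: str):
    """Look up a label case-insensitively; unknown labels fall back to gray."""
    return EMOTION_COLORS.get(emotion.lower(), EMOTION_COLORS["neutral"])
```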
### Performance Metrics
- FPS: 60+ on modern CPUs (with GPU: 120+)
- Latency: < 50ms per frame
- Accuracy: ~65-70% on FER2013 test set
- Memory: ~500MB RAM usage
## Configuration

Edit `src/config.py` to customize:

```python
# Camera Settings
CAMERA_WIDTH = 1920
CAMERA_HEIGHT = 1080
FPS = 60

# Analysis Settings
ANALYSIS_INTERVAL = 0.1  # Seconds between emotion checks
ANALYSIS_THROTTLE = 3    # Analyze every N frames

# Visualization
SHOW_FPS = True
SHOW_GRAPH = True
THEME_COLOR = (0, 255, 255)  # Cyan
```
## Contributing

Contributions are welcome! Please feel free to submit a pull request. For major changes, open an issue first to discuss what you would like to change.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- DeepFace: serengil/deepface
- FER2013 Dataset: Kaggle FER2013
- MediaPipe: Google MediaPipe
- Mini-XCEPTION: oarriaga/face_classification
## Author
Shayan Taherkhani
- Website: shayantaherkhani.ir
- GitHub: @Shayanthn
- Email: admin@shayantaherkhani.ir
## Known Issues
- First run may be slow due to model loading
- Requires good lighting for optimal accuracy
- Multiple faces in frame: only the first detected face is analyzed
## Future Enhancements
- Multi-face support
- Age and gender detection
- Emotion history timeline graph
- Export analysis data to CSV/JSON
- Web dashboard for remote monitoring
- Mobile app (iOS/Android)
- Cloud deployment (AWS/Azure)
## Support
If you have any questions or need help, please:
- Check the Issues page
- Create a new issue with detailed information
- Contact via email: admin@shayantaherkhani.ir
Made with ❤️ by Shayan Taherkhani
Star this repository if you find it helpful!