Analyzing camera feed in real-time using RedisAI, OpenCV-Python, and Redis plugins for Grafana
Can you spot Batman with the BatCamera powered by Grafana?
Two years ago, Redis Labs published the article “My Other Stack Is RedisEdge,” introducing real-time video analysis with Redis. Since then, we have built Redis plugins for Grafana to visualize RedisTimeSeries data directly, without Prometheus adapters. In addition, the recently developed Image panel by Volkov Labs can display AI-analyzed, base64-encoded images on a dashboard straight from the database, eliminating the need for a video server.
RedisEdge Real-time Video Analytics
1. A video stream producer adds a captured frame to a Redis Stream.
2. The new frame triggers the execution of a RedisGears script on the Redis server that:
- Downsamples the frame rate of the input stream and prepares the input frame to match the model’s requirements.
- Calls RedisAI to execute an object recognition model (YOLO) on the frame.
- Stores the model’s outputs (frames per second, profiler information, recognized people) in RedisTimeSeries, and the AI-analyzed results in Redis Streams.
3. A video web server renders the final image based on real-time data from Redis Streams.
4. Time series are exported from Redis to Prometheus using an adapter.
5. The video web server output and Prometheus metrics are visualized in Grafana dashboards.
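The frame-rate downsampling in step 2 can be sketched in a few lines of plain Python. This is a hypothetical illustration of the idea, not the project's actual gears-yolo.py logic:

```python
class FrameDownsampler:
    """Keep at most `fps` frames per second by dropping frames that
    arrive before the next time slot (hypothetical sketch, not the
    project's RedisGears code). Timestamps are in milliseconds, as
    in Redis Stream entry IDs."""

    def __init__(self, fps):
        self.interval_ms = 1000 // fps  # e.g. 166 ms for 6 fps
        self.next_slot = 0

    def accept(self, ts_ms):
        if ts_ms >= self.next_slot:
            self.next_slot = ts_ms + self.interval_ms
            return True  # keep this frame
        return False     # drop it

# A 30 fps feed downsampled to 6 fps keeps roughly 1 frame in 5.
sampler = FrameDownsampler(fps=6)
kept = sum(sampler.accept(i * 1000 // 30) for i in range(30))
print(kept)  # 6 frames kept out of 30
```

Dropping frames early keeps the expensive YOLO inference from falling behind the camera's native frame rate.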
In this article, we streamline the architecture to keep it simple and limit the number of moving parts that can break the workflow, using only three services:
- Video Producer to capture video frames
- Redis to process frames and produce base64-encoded, AI-analyzed images
- Grafana with Redis plugins to display the final image and metrics.
All scripts and the Grafana dashboards are available in the GitHub repository.
Video stream producer
For the video stream producer, we used a Raspberry Pi 4 (your tiny, dual-display desktop computer) with Grafana installed to display the final images and metrics. Grafana has a tutorial on installing Grafana on a Raspberry Pi if you haven't tried it yet.
The Raspberry Pi 4 should have a camera module installed to capture the live feed. To grab the live feed and save it to a Redis Stream, install the requirements and run the Python script:
# pip3 install -r requirements.txt
# python3 edge-camera.py -u redis://redis:6379 --fps 6 --rotate-90-clockwise true
The script accepts various parameters:
- -u to provide the Redis URL with RedisGears, RedisAI, and RedisTimeSeries (the redismod or redis-opencv Docker images)
- --fps to specify the number of frames per second to capture
- --rotate-90-clockwise true/false to rotate the frame, if required
- --maxlen, --height, --width to set the maximum number of entries in the stream and the height/width of the frame.
After starting the script, you should see the output with Stream’s Id and the jpeg file size.
Connected to Redis: ParseResult(scheme='redis', netloc='redis:6379', path='', params='', query='', fragment='')
id: b'1622741011514-0', size: 5510
id: b'1622741011677-0', size: 10661
id: b'1622741011838-0', size: 5670
id: b'1622741012008-0', size: 11819
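The Connected to Redis line suggests the producer parses the -u URL with Python's standard urllib.parse. Below is a minimal sketch of that parsing; the capture loop is shown only as hedged comments because it needs OpenCV and redis-py installed, and it is not edge-camera.py verbatim:

```python
from urllib.parse import urlparse

def parse_redis_url(url):
    """Split a redis:// URL into host and port, mirroring the
    ParseResult printed by the producer (hypothetical helper)."""
    parsed = urlparse(url)
    return parsed.hostname, parsed.port or 6379

host, port = parse_redis_url('redis://redis:6379')

# With redis-py and OpenCV installed, the capture loop is roughly:
#
#   import cv2, redis
#   r = redis.Redis(host=host, port=port)
#   camera = cv2.VideoCapture(0)
#   ok, frame = camera.read()
#   _, jpeg = cv2.imencode('.jpg', frame)
#   entry_id = r.xadd('camera:0', {'img': jpeg.tobytes()}, maxlen=1000)
#   print(f"id: {entry_id}, size: {len(jpeg)}")
```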
Redis-OpenCV docker image
To resize and create final images, the RedisGears script uses OpenCV-Python — a library of Python bindings designed to solve computer vision problems. This library should be installed using RedisGears and registered in the Redis database, which can take a while depending on your platform:
RG.PYEXECUTE "GB().run()" REQUIREMENTS opencv-python
We created a Docker image based on the latest versions of RedisTimeSeries, RedisAI, and RedisGears with OpenCV pre-installed. This Docker image is built automatically every night using a GitHub Action and can be used for any computer vision project.
To start a Redis container using the Redis-OpenCV image, run:
$ docker run -p 6379:6379 --name=redis-opencv ghcr.io/redisgrafana/redis-opencv:latest
Execute the RG.PYDUMPREQS command to validate that OpenCV-Python and other dependencies loaded correctly:
A new “Available Requirements” panel was added in the latest version of the Redis Application plugin as part of the RedisGears dashboard and displays the requirements:
The project contains a Docker Compose configuration file, which is the recommended way to start the Redis and Grafana services and provision the data source with dashboards:
version: "3.4"

services:
  redis:
    container_name: redis
    image: ghcr.io/redisgrafana/redis-opencv:latest
    ports:
      - 6379:6379

  grafana:
    container_name: grafana
    image: ghcr.io/redisgrafana/redis-app:latest
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_BASIC_ENABLED=false
      - GF_ENABLE_GZIP=true
      - GF_USERS_DEFAULT_THEME=light
    volumes:
      - ./volkovlabs-image-panel:/var/lib/grafana/plugins/volkovlabs-image-panel
      - ./provisioning:/etc/grafana/provisioning
      - ./dashboards:/var/lib/grafana/dashboards
Loading YOLO model, PyTorch script
Support for the RedisAI module will be added to the Redis plugins in an upcoming release. Until then, AI models and scripts should be loaded manually from the command line using AI.MODELSTORE and AI.SCRIPTSET. The AI.MODELSTORE command was introduced in RedisAI 1.2 and replaces the deprecated AI.MODELSET:
cd ai/
cat tiny-yolo-voc.pb | redis-cli -h redis -x AI.MODELSTORE yolo:model TF CPU INPUTS 1 input OUTPUTS 1 output BLOB
cat ai-yolo-script.py | redis-cli -h redis -x AI.SCRIPTSET yolo:script CPU SOURCE
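The same loading step can be scripted with redis-py's generic execute_command. The helper below, which builds the argument list, is a hypothetical sketch matching the redis-cli invocation above, not part of the project:

```python
def modelstore_args(key, blob):
    """Build the AI.MODELSTORE argument list for a TensorFlow model on
    CPU with one input and one output tensor, as in the redis-cli call
    above (hypothetical helper)."""
    return ['AI.MODELSTORE', key, 'TF', 'CPU',
            'INPUTS', '1', 'input',
            'OUTPUTS', '1', 'output',
            'BLOB', blob]

# In practice the blob comes from the model file:
#   blob = open('tiny-yolo-voc.pb', 'rb').read()
args = modelstore_args('yolo:model', b'placeholder-model-blob')

# With redis-py installed and the server running:
#   import redis
#   redis.Redis(host='redis').execute_command(*args)
```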
To verify that the AI model and script are saved in the database, run the AI.MODELGET and AI.SCRIPTGET commands:
> ai.modelget yolo:model META
 1) "backend"
 2) "TF"
 3) "device"
 4) "CPU"
 5) "tag"
 6) ""
 7) "batchsize"
 8) (integer) 0
 9) "minbatchsize"
10) (integer) 0
11) "inputs"
12) 1) "input"
13) "outputs"
14) 1) "output"

> AI.SCRIPTGET yolo:script META
1) "device"
2) "CPU"
3) "tag"
4) ""
Registering RedisGears script
The RedisGears StreamReader script gears-yolo.py should be registered via the RG.PYEXECUTE command in redis-cli, or with the RedisGears script editor panel in the Camera processing Grafana dashboard included in this project.
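For reference, a RedisGears v1 StreamReader registration has roughly the shape sketched below. This is a hedged illustration of the pattern, not the contents of gears-yolo.py, and the processing body is elided:

```python
# Script body executed inside Redis by the RedisGears module, which
# provides GB/GearsBuilder; it cannot run in a plain Python interpreter.
GEARS_SCRIPT = """
def process(entry):
    # downsample, call RedisAI, write results (elided)
    pass

GB('StreamReader').foreach(process).register('camera:0')
"""

def pyexecute_args(script):
    """Build RG.PYEXECUTE arguments for redis-py's execute_command
    (hypothetical helper)."""
    return ['RG.PYEXECUTE', script, 'REQUIREMENTS', 'opencv-python']

# With redis-py installed and the server running:
#   import redis
#   redis.Redis(host='redis').execute_command(*pyexecute_args(GEARS_SCRIPT))
```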
Display AI-analyzed images in Grafana
The latest analyzed image and additional data stored in the Redis Stream can be retrieved using the XREVRANGE command, where boxes and people contain the coordinates and the number of people recognized in the frame. The field img contains the base64-encoded image, which can be displayed on the camera video producer and a monitoring workstation:
> xrevrange camera:0:yolo + - count 1
1) 1) "1622741013280-0"
   2) 1) "ref"
      2) "1622741013171-0"
      3) "boxes"
      4) "[X, X, X, X]"
      5) "people"
      6) "N"
      7) "img"
      8) "/9j/4AAQSkZJRgABAQAA/Base64-encoded-image"
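On the client side, the entry's flat field/value list can be folded into a dictionary and the img field decoded back to raw bytes. A standard-library sketch with placeholder values (not real model output):

```python
import base64

def entry_to_dict(fields):
    """Pair up the flat [field, value, field, value, ...] list that
    Redis Streams return (hypothetical helper)."""
    return dict(zip(fields[0::2], fields[1::2]))

# Placeholder entry shaped like the XREVRANGE reply above.
jpeg_bytes = b'\xff\xd8\xff\xe0fake-jpeg'
fields = ['ref', '1622741013171-0',
          'boxes', '[X, X, X, X]',
          'people', 'N',
          'img', base64.b64encode(jpeg_bytes).decode()]

entry = entry_to_dict(fields)
image = base64.b64decode(entry['img'])  # raw JPEG bytes again
```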
The Camera and Camera processing dashboards were created to display the analyzed camera feed and metrics in real-time using Streaming mode in Grafana. Streaming mode, supported by the Redis Data Source, refreshes only a specific panel instead of the whole dashboard and shows the camera feed in real time with a slight delay.
To display base64-encoded images from any data source, we created a custom panel. This panel is registered in the Grafana repository and can display base64-encoded and raw-bytes images in various formats.
What’s next?
This is just the beginning of the project. When support for RedisAI commands and custom panels is added to the Redis plugins, it will be effortless to upload AI models and modify scripts. A friendly UI will make it easier to experiment with various models. I plan to use the latest version of the YOLO model and try the PyTorch implementations of YOLO v3 and v5 for faster processing and better accuracy.
Join our GrafanaCONline 2021 session “Plugin showcase: Building a single pane of observability glass” to learn more about Grafana plugins and see the demo of this project.
We recently published a “New plugins connect almost all of Redis for monitoring and visualization in Grafana” blog post, which is a great starting point for using Redis plugins.
Volkov Labs is an agency founded by long-time Grafana contributor Mikhail Volkov. We find elegant solutions for non-standard tasks.
Check out the latest plugins and projects at https://volkovlabs.io