Analyzing Videos With Python
Video analysis is an increasingly important part of modern life, and its uses keep expanding every day: airport security, sports, cinema, transport, health, home automation, and more.
Look closely at your environment and you will probably find a camera, most likely with a system running in the background that analyzes everything it captures.
To analyze videos, the first question you should ask yourself is not "how" but "what". What do I want to do? What do I want to analyze? What are my scope and limits? For example, maybe you want to analyze balls. Perfect. What kind of analysis do you want to perform?
One option is to detect balls, but you could also track them. And it doesn't stop there; there are a number of other questions to consider before starting. How many balls do you want to analyze? Is there a maximum or a minimum, and what happens if those limits are exceeded? What happens if a ball enters in the middle of the video: should you detect it or ignore it? These and many other questions come up when you try to analyze videos, and each one has a solution that works perfectly for that situation, but maybe not for the rest.
That's why we need to know what to track before we start.
Once you have defined your scope and limits, you can begin to investigate how to carry it out. We will start with a simple example and add more analysis from there. The simplest task is to detect movement: a ball, as in the previous example. For the moment we are not interested in anything other than finding our ball inside our video.
First, we create a Python virtual environment so that packages are installed locally in the environment instead of globally.
```
> python3 -m venv env
> source env/bin/activate
```
Now we are going to install the packages we need. Make sure you have updated pip to its latest version.
```
> pip install opencv-python
> pip install opencv-contrib-python
```
This installs the latest version of OpenCV (4.1 at the time of writing). With opencv-contrib you get access to many more features within the same cv2 module. Installing OpenCV also pulls in NumPy as a dependency.
First we will build the general structure: open a video and play it, nothing else. I'm going to use this video, which is just an orange ball rolling on a white background.
```python
import cv2
import numpy as np


class Detector:
    def __init__(self):
        self.height = 0
        self.width = 0

    def detect_movement(self):
        cap = cv2.VideoCapture('assets/vid_test.mp4')
        if not cap.isOpened():
            raise ValueError('The path specified is not valid.')
        while cap.isOpened():
            success, frame = cap.read()
            if not success:
                break
            # Your video analyzer goes here!
            cv2.imshow("Display the result", frame)
            cv2.waitKey(25)  # Wait ~25 ms per frame so the video plays
        cap.release()  # Don't forget to release your video!


if __name__ == "__main__":
    detector = Detector()
    detector.detect_movement()
    cv2.destroyAllWindows()  # Close all the opened windows
```
Here’s the result!
Apply a Mask
Now let's start with the analysis. First, we are going to apply a mask to our frame so that we only see the ball.
```python
lower_red = np.array([0, 130, 75])
upper_red = np.array([255, 255, 255])

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_red, upper_red)
contours, _ = cv2.findContours(
    mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE
)
red = cv2.bitwise_and(frame, frame, mask=mask)
```
I created a red mask so I can ignore everything else. If you put another object in the video, as long as it is not red, it is going to be ignored! Since our ball is orange/red, it is displayed. Check it out!
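To make the thresholding concrete, here is what cv2.inRange does for each pixel, sketched with plain NumPy (the two sample HSV pixels below are made up for illustration):

```python
import numpy as np

lower = np.array([0, 130, 75])
upper = np.array([255, 255, 255])

# Two HSV pixels: a saturated orange-red (inside the range) and white (outside)
pixels = np.array([[10, 200, 220],   # orange-red -> kept
                   [0, 0, 255]])     # white, saturation 0 -> masked out

# cv2.inRange(hsv, lower, upper) makes this per-channel comparison,
# writing 255 where the pixel falls inside the box and 0 elsewhere
mask = np.where(((pixels >= lower) & (pixels <= upper)).all(axis=1), 255, 0)
print(mask)  # [255   0]
```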
Difference between two frames
Now we are going to look at the difference between two frames. We need one frame to compare with the rest; I'm going to take the first frame.
Make sure you put this line before the while loop.
```python
_, first_frame = cap.read()
```
After that, we add this line inside the while loop.
```python
diff = cv2.absdiff(first_frame, frame)
```
Our first frame is the white background, RGB(255, 255, 255), and our ball is orange, let's say RGB(255, 127, 0). If we take the absolute difference between these two colors, the result is RGB(0, 128, 255): light blue. The background stays white in the rest of the frames, so the difference between RGB(255, 255, 255) and RGB(255, 255, 255) is RGB(0, 0, 0): black!
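You can verify that arithmetic directly; the per-channel absolute difference below mirrors what cv2.absdiff computes for a single pixel:

```python
import numpy as np

white = np.array([255, 255, 255])   # background pixel
orange = np.array([255, 127, 0])    # ball pixel

# Per-channel absolute difference, the same operation cv2.absdiff performs
diff = np.abs(white - orange)
print(diff)  # [  0 128 255] -> light blue
```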
Here is the result:
Using a Detector
Now we are going to add a real detector. OpenCV has several options for detecting objects, for example Haar cascades if you want to detect faces or pedestrians.
Our example is far simpler, so we are going to use one that fits our needs: SimpleBlobDetector. It allows us to detect simple objects like balls.
```python
def getBlobParams(self):
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 0  # 0 detects dark blobs, 255 light ones (grayscale)
    return params


detector = cv2.SimpleBlobDetector_create(self.getBlobParams())
keypoints = detector.detect(frame)
im_with_keypoints = frame.copy()

if keypoints:
    x_pt, y_pt = int(keypoints[0].pt[0]), int(keypoints[0].pt[1])
    ball_diameter = round(keypoints[0].size)
    cv2.circle(
        im_with_keypoints,       # Output image
        (x_pt, y_pt),            # X, Y coordinates of the center
        int(ball_diameter / 2),  # Radius of the ball
        (255, 0, 0),             # Color, in BGR
        3                        # Thickness of the line
    )
```
The blue circle around our ball means that our detector is working. If you check the original video, you will see a black border around the ball. I'm actually detecting this perfect circle instead of the ball itself, just by telling my detector that I want to track something dark; SimpleBlobDetector tracks circular blobs by default.
This gives you the exact coordinates of the ball, which can be used for many things, like tracking.
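For example, you could accumulate those coordinates frame by frame into a trajectory. This is a hypothetical sketch (record_position and the trajectory list are not part of the code above):

```python
trajectory = []  # one (x, y) center per frame where the ball was seen

def record_position(keypoints, trajectory):
    """Append the first detected blob's center, if any blob was found."""
    if keypoints:
        x, y = keypoints[0].pt  # SimpleBlobDetector keypoints expose .pt
        trajectory.append((int(x), int(y)))
    return trajectory
```

Calling record_position(keypoints, trajectory) once per frame inside the loop gives you the ball's path, from which you could estimate its speed or direction.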
As you can see, OpenCV is a really powerful tool: you just need a clear idea of what you want, and OpenCV does the heavy lifting for you!