In today’s article, I’ll demonstrate how to use the Amazon Rekognition SDK to detect people and vehicles in images with varying resolutions and quality.
Use case
Recently, I was tasked with showcasing Amazon Rekognition’s ability to detect people and vehicles in images. The goal was to identify non-compliance in the provided photos, such as detecting people or vehicles in restricted areas. Upon detection, an event is triggered, which then sends a notification. However, for this tutorial, we will focus solely on using Amazon Rekognition for detection.
Initial Setup
I began by setting up my code with the necessary library imports and retrieving credentials using a predefined profile.
import os

import boto3
from botocore.exceptions import ClientError  # raised by boto3 clients on AWS service errors
from PIL import Image, ImageDraw, ImageFont
# choose profile and initiate boto3 session
profile_name = "sandbox"
session = boto3.Session(profile_name=profile_name)
You can configure a profile with your credentials as follows:
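A named profile lives in the shared AWS credentials file (the sandbox name matches the profile used in the code above; the key values here are placeholders):

```ini
# ~/.aws/credentials
[sandbox]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```

You can also create this entry interactively by running aws configure --profile sandbox.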
Initializing Services
Next, I initialized the S3 and Rekognition services and specified the S3 bucket where the images are stored.
# Initialize the S3 and Rekognition clients with the correct region
region = 'us-east-1' # Replace with the correct region of your services
s3 = session.client('s3', region_name=region)
rekognition_client = session.client('rekognition', region_name=region)
# S3 bucket and folder containing the images
bucket_name = 'bucket-name'
folder_name = 'folder-name' # if needed
Detecting Labels and Drawing Bounding Boxes
With the services initialized, I created a function to detect labels (such as people and vehicles) in images stored in S3. The function returns a bounding box around detected objects.
# Function to detect labels and return bounding boxes
def detect_labels_in_s3_image(bucket, image_key):
    try:
        response = rekognition_client.detect_labels(
            Image={'S3Object': {'Bucket': bucket, 'Name': image_key}},
            MaxLabels=10,
            MinConfidence=80
        )
        return response['Labels']
    except ClientError as e:
        print(f"Error detecting labels in {image_key}: {e}")
        return []
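For context, detect_labels returns a structure along the lines of the trimmed mock below (illustrative values, not a real API response). Only labels that carry an Instances list come with bounding boxes, which is why the drawing function in the next step iterates over label.get('Instances', []):

```python
# Trimmed mock of a DetectLabels response (illustrative values, not real output)
mock_response = {
    'Labels': [
        {
            'Name': 'Person',
            'Confidence': 98.1,
            'Instances': [
                # BoundingBox values are ratios of the image dimensions (0..1)
                {'BoundingBox': {'Left': 0.21, 'Top': 0.33, 'Width': 0.10, 'Height': 0.25},
                 'Confidence': 98.1}
            ],
        },
        {
            # Scene-level labels like 'Outdoors' have no Instances, hence no boxes
            'Name': 'Outdoors',
            'Confidence': 90.5,
            'Instances': [],
        },
    ]
}

# Count how many bounding boxes would be drawn for this response
boxes = [inst['BoundingBox']
         for label in mock_response['Labels']
         for inst in label.get('Instances', [])]
print(len(boxes))  # 1
```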
def draw_bounding_boxes(image, labels):
    draw = ImageDraw.Draw(image)
    image_width, image_height = image.size
    try:
        # Load a font
        font = ImageFont.load_default()
    except IOError:
        print("Error loading font.")
        return
    for label in labels:
        for instance in label.get('Instances', []):
            box = instance['BoundingBox']
            left = image_width * box['Left']
            top = image_height * box['Top']
            width = image_width * box['Width']
            height = image_height * box['Height']
            # Define the bounding box corners (closing the rectangle)
            points = (
                (left, top),
                (left + width, top),
                (left + width, top + height),
                (left, top + height),
                (left, top)
            )
            # Draw the bounding box
            draw.line(points, fill='red', width=3)
            # Add a label above the bounding box
            text_position = (left, top - 12)
            draw.text(text_position, label['Name'], fill='black', font=font)
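Rekognition reports each BoundingBox as ratios of the image dimensions, which is why the function above multiplies by the image width and height. Pulled out as a small standalone helper (box_to_pixels is my own name, not part of any SDK), the conversion looks like this:

```python
def box_to_pixels(box, image_width, image_height):
    """Convert Rekognition's ratio-based BoundingBox to pixel coordinates."""
    left = image_width * box['Left']
    top = image_height * box['Top']
    right = left + image_width * box['Width']
    bottom = top + image_height * box['Height']
    return (left, top, right, bottom)

# Example: a box covering the center quarter of a 640x480 image (illustrative values)
print(box_to_pixels({'Left': 0.25, 'Top': 0.25, 'Width': 0.5, 'Height': 0.5}, 640, 480))
# (160.0, 120.0, 480.0, 360.0)
```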
Processing Images from S3
Finally, I processed each image stored in the S3 bucket. The script downloads each image, applies the detection function, draws bounding boxes around the detected objects, and saves the annotated images.
# Process each image in the S3 folder
try:
    response = s3.list_objects_v2(Bucket=bucket_name, Prefix=folder_name)
    if 'Contents' in response:
        for obj in response['Contents']:
            image_key = obj['Key']
            if image_key.lower().endswith(('.png', '.jpg', '.jpeg')):
                # Download image from S3
                image_path = os.path.join('path/local/computer', os.path.basename(image_key))
                s3.download_file(bucket_name, image_key, image_path)
                # Open the image
                image = Image.open(image_path)
                # Detect labels
                labels = detect_labels_in_s3_image(bucket_name, image_key)
                # Draw bounding boxes on the image
                draw_bounding_boxes(image, labels)
                # Save the annotated image
                output_path = os.path.join('path/local/computer', os.path.basename(image_key))
                image.save(output_path)
                print(f"Processed {image_key} with bounding boxes. Saved to {output_path}.\n")
    else:
        print(f"No images found in the folder '{folder_name}' in bucket '{bucket_name}'.")
except ClientError as e:
    print(f"Error listing objects in bucket '{bucket_name}': {e}")
Results
After processing the images using Amazon Rekognition, the results will include labels indicating detected objects, such as people and vehicles, along with their respective bounding boxes drawn on the images.
These annotated images, saved to your local file system, clearly highlight the detected objects, making it easy to spot non-compliance or anomalies. By automating this process, you can analyze large volumes of images efficiently, trigger alerts, and act on the results, improving compliance and security in your operations.
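As a sketch of the alerting step mentioned in the use case, the detected labels can be filtered for people and vehicles before raising an event. The label set and threshold below are assumptions for illustration (check Rekognition's label taxonomy for your scenario), and find_violations is a hypothetical helper, not part of the SDK:

```python
# Labels treated as non-compliant in a restricted area (assumed set, adjust as needed)
RESTRICTED_LABELS = {'Person', 'Car', 'Truck', 'Motorcycle', 'Bus'}

def find_violations(labels, min_confidence=80.0):
    """Return (name, confidence) pairs for restricted labels with located instances."""
    violations = []
    for label in labels:
        if label['Name'] in RESTRICTED_LABELS and label.get('Instances'):
            for inst in label['Instances']:
                if inst.get('Confidence', 0) >= min_confidence:
                    violations.append((label['Name'], inst['Confidence']))
    return violations

# Example with mock labels (illustrative values, not real Rekognition output)
sample_labels = [
    {'Name': 'Person', 'Instances': [{'Confidence': 97.2, 'BoundingBox': {}}]},
    {'Name': 'Tree', 'Instances': []},
]
print(find_violations(sample_labels))  # [('Person', 97.2)]
```

A non-empty result from a helper like this is the point where you would publish the event that triggers a notification.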