
Python OpenCV - Template Matching using the live camera feed frame as input


I'm getting back into OpenCV after having tried it on Android some time ago. Now I'm using OpenCV 2 with Python 2. So far I've been able to get the live camera feed working, and, in a separate project, to implement template matching: given a parent image and a small image that exists within it, the script finds the sub-image in the parent image and outputs a copy of the parent image with red rectangles drawn around the matches.

Here is the code for the template matching. It's nothing special; it's the same example from the OpenCV site:

import cv2
import numpy as np
from matplotlib import pyplot as plt

# load the scene image and the template, both grayscale for matching
img_rgb = cv2.imread('mario.jpg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('mario_coin.png',0)
w, h = template.shape[::-1]

# slide the template over the scene and keep every location whose
# normalized correlation score clears the threshold
res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where( res >= threshold)

# draw a red (BGR 0,0,255) rectangle around each match and save the result
for pt in zip(*loc[::-1]):
    cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0,0,255), 2)
cv2.imwrite('res.png',img_rgb)

Then as for my code for the Live Camera Feed, I have this:

# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warmup
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image, then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

So far, both of those pieces of code work nicely, independently of one another. What I tried next was to insert the template matching code into the camera stream loop, right before the frame gets displayed.

Here is what I came up with:

from picamera.array import PiRGBArray
from picamera import PiCamera
from matplotlib import pyplot as plt

import time
import cv2
import numpy as np


# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

template = cv2.imread('mario_coin.png', 0)


# allow the camera to warmup
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr",
                                       use_video_port=True):
    # grab the raw NumPy array representing the image,
    # then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array

    # run the template matching on this frame: take the current
    # image, look for the template in it, and draw a rectangle
    # around every match
##    img_rbg = cv2.imread('mario.jpg')
    img_rbg = image

##    img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
    img_gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

    w, h = template.shape[::-1]

    res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)

    threshold = 0.8

    loc = np.where(res >= threshold)

    for pt in zip(*loc[::-1]):
##        cv2.rectangle(img_rbg, pt, (pt[0] + w, pt[1] + h),
##                      (0,0,255), 2)
        cv2.rectangle(image, pt, (pt[0] + w, pt[1] + h),
                      (0,0,255), 2)

##    image = img_rgb


    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

What I'm trying to do is, instead of reading the image from disk with cv2.imread(), use the frame coming in from the camera as the input to the template matching algorithm I had earlier.
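
Conceptually, the only lines that should change from the file-based version are these (just a sketch of the substitution, using the same names as the loop above; I'm assuming the conversion flag should be COLOR_BGR2GRAY here since format="bgr" hands back BGR frames, which is one difference from what I wrote above):

# instead of reading the scene from disk ...
# img_rgb = cv2.imread('mario.jpg')
# ... use the frame the camera just produced
img_rgb = frame.array                                  # 640x480 BGR array
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)   # frame is BGR, not RGB
res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)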

But what happens is that the camera opens for a second (indicated by the light), then it shuts off and the program halts.

I really don't know what's happening. Does anyone have any leads on how to use the live camera feed as an input in template matching?

I'm using a Raspberry Pi 2 with the v1.3 Camera.
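
One thing I still need to rule out is whether the template image even loads on the Pi, since cv2.imread() silently returns None on a bad path, and a None template would make template.shape and cv2.matchTemplate throw on the very first frame. Here is a minimal sketch of the guards I'd add, using the same names as the code above:

# before the loop: fail loudly if the template didn't load
template = cv2.imread('mario_coin.png', 0)
if template is None:
    raise IOError("could not read mario_coin.png - check the path")
w, h = template.shape[::-1]

# inside the loop: print the real error instead of letting the
# program silently die
try:
    res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
except cv2.error as e:
    print("matchTemplate failed: %s" % e)
    rawCapture.truncate(0)
    continue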

