Reference no: EM132852297
CMP9135M Computer Vision - University of Lincoln
Learning Outcome 1: Critically evaluate and apply the theories, algorithms, techniques and methodologies involved in computer vision.
Learning Outcome 2: Design and implement solutions to a range of computer vision applications and problems, and evaluate their effectiveness.
Task 1: Image Segmentation and Detection
Download two files, 'plant image dataset.zip' and 'leaf_counts.csv', from Blackboard. Unzip the dataset file; you should obtain a set of 32 images: 16 plant colour images and 16 corresponding leaf labelled images (ground-truth segmentation). Figure 1 shows an example of one plant image and its corresponding leaf labelled image. The 'leaf_counts.csv' file contains the number of leaves for each image.
Please use image processing techniques to implement the following three tasks. Note that you are encouraged to develop one model, with the same parameter settings, for all the images.
Task 1.1: Automated plant object segmentation. For each image, automatically segment the plant from the background.
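One possible starting point for Task 1.1 (a sketch only, not a prescribed method) is a simple colour threshold in HSV space followed by morphological clean-up; the HSV bounds and kernel size below are assumptions that would need tuning on the provided dataset.

```python
import cv2
import numpy as np

def segment_plant(image_path):
    """Segment the plant from the background with a simple HSV green threshold.

    The HSV bounds are illustrative assumptions, not values taken from the
    assignment brief; they would need tuning on the actual plant images.
    """
    bgr = cv2.imread(image_path)                       # OpenCV loads images as BGR
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    lower_green = np.array([30, 40, 40])               # assumed lower HSV bound
    upper_green = np.array([90, 255, 255])             # assumed upper HSV bound
    mask = cv2.inRange(hsv, lower_green, upper_green)  # 255 where the pixel is "green"

    # Morphological opening/closing to remove speckle noise and fill small holes
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask > 0                                    # boolean plant mask
```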
Task 1.2: Segmentation evaluation. For each plant image, calculate the Dice Similarity Score (DS) defined in Equation 1, where M is the segmented plant mask obtained from Task 1.1 and S is the corresponding ground-truth binary mask. Note that, for the provided leaf labelled images, you can convert the colour images into binary images (i.e. plant object vs. background) and use the converted binary images as the ground-truth masks.
DS = 2|M ∩ S| / (|M| + |S|)    (Equation 1)
The calculated DS should be between 0 and 1: DS is 1 if your segmentation matches the ground-truth mask perfectly, whilst DS is 0 if there is no overlap between your segmentation and the ground-truth mask.
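A direct implementation of Equation 1, assuming both masks are boolean arrays of the same shape (the ground-truth label image can be binarised by treating any non-zero pixel as plant):

```python
import numpy as np

def dice_score(pred_mask, gt_mask):
    """Dice Similarity Score DS = 2|M ∩ S| / (|M| + |S|) for two boolean masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0  # both empty -> perfect match
```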
Task 1.3: Automated leaf detection. For each plant image, automatically detect leaf objects and count the number of leaves in the image.
Figure 1: (a) Plant image; (b) leaf labelled image (ground truth).
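For Task 1.3, one common (but not prescribed) route is a distance-transform watershed on the plant mask from Task 1.1: local maxima of the distance map act as one seed per leaf, and the watershed regions give per-leaf bounding boxes and a count. The minimum peak spacing below is an assumed parameter to tune.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def count_leaves(plant_mask, min_distance=15):
    """Split the plant mask into leaf regions with a distance-transform watershed.

    min_distance (minimum spacing between leaf centres, in pixels) is an
    assumed parameter that would need tuning on the dataset.
    """
    mask = plant_mask.astype(bool)
    dist = ndimage.distance_transform_edt(mask)

    # Leaf centres = local maxima of the distance map, used as watershed seeds
    peaks = peak_local_max(dist, min_distance=min_distance, labels=mask)
    seeds = np.zeros(mask.shape, dtype=int)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    labels = watershed(-dist, seeds, mask=mask)   # one label per leaf candidate
    boxes = ndimage.find_objects(labels)          # bounding-box slices, one per leaf
    return labels.max(), boxes
```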
Your report should include:
1) For two plant images ('plant_001', 'plant_002'): the original image, the segmented plant image, the calculated DS value, and the detected leaf image with the number of leaves counted. Note that you are not required to segment every leaf precisely; you can use a bounding box to show the location of each leaf.
2) For all 16 plant images: a bar graph with the x-axis representing the image number and the y-axis representing the corresponding DS (see the plotting sketch after this list).
3) The mean DS over all 16 images.
4) Leaf detection performance evaluation: a table showing, for each image, the absolute difference between the automated leaf count and the actual number of leaves provided in the csv file, together with the mean of these differences over all 16 images.
5) A brief description and justification of the implementation steps.
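For requirements 2) and 3), a minimal matplotlib sketch, assuming ds_values holds the 16 Dice scores computed with dice_score above:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_dice_scores(ds_values):
    """Bar graph of per-image DS (x-axis: image number, y-axis: DS) plus the mean."""
    image_ids = np.arange(1, len(ds_values) + 1)
    plt.bar(image_ids, ds_values)
    plt.xlabel("Image number")
    plt.ylabel("Dice Similarity Score")
    plt.title(f"Mean DS = {np.mean(ds_values):.3f}")  # requirement 3): mean over all images
    plt.ylim(0, 1)
    plt.show()
```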
Task 2: Feature Calculation
Download the image ('ImgPIA.jpeg') from Blackboard. This part of the assignment deals with Feature Extraction, in both the Frequency and Spatial domains.
Task 2.1: Read the image ('ImgPIA.jpeg') and extract features for both radius and direction, as described in the Spectral Approach section of the Feature Extraction lecture. For additional marks you can change the values of radius and angle, and present those values in a plot or table.
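One common way to realise the radius (ring) and direction (wedge) features of the spectral approach is to sum the 2-D FFT power spectrum over annular and angular bins. A sketch is below; gray is assumed to be a 2-D greyscale array (e.g. loaded with cv2.imread(..., cv2.IMREAD_GRAYSCALE)), and the bin counts are assumptions you can vary for the additional marks.

```python
import numpy as np

def spectral_features(gray, n_rings=5, n_wedges=8):
    """Ring (radius) and wedge (direction) energies from the 2-D FFT power spectrum.

    n_rings and n_wedges are assumed bin counts to experiment with.
    """
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    power = np.abs(f) ** 2

    h, w = gray.shape
    cy, cx = h / 2.0, w / 2.0
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - cy, x - cx)
    angle = np.arctan2(y - cy, x - cx) % np.pi   # direction is symmetric, fold to [0, pi)

    ring_edges = np.linspace(0, radius.max(), n_rings + 1)
    wedge_edges = np.linspace(0, np.pi, n_wedges + 1)
    ring_feats = [power[(radius >= r0) & (radius < r1)].sum()
                  for r0, r1 in zip(ring_edges[:-1], ring_edges[1:])]
    wedge_feats = [power[(angle >= a0) & (angle < a1)].sum()
                   for a0, a1 in zip(wedge_edges[:-1], wedge_edges[1:])]
    return ring_feats, wedge_feats
```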
Task 2.2: Read the image ('ImgPIA.jpeg') and extract features from the image histogram (i.e. 1st order), at least six (6) features from the co-occurrence matrix (the original paper by Haralick has also been made available to you), and at least five (5) features from the Gray Level Run Length matrix. Note that both the co-occurrence and GLRL based features can be directional and a function of the distance between pixel coordinates. For additional marks you can change the bit-depth of the image (i.e. 8, 6, 4 bit) and recalculate the features, presenting them as a plot or table.
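For the 1st-order and co-occurrence parts, scikit-image's graycomatrix/graycoprops cover six Haralick-style GLCM properties; a sketch is below, assuming gray is an 8-bit (uint8) greyscale image. The distance and angle defaults are illustrative, since the brief asks you to vary direction and distance yourself. GLRL features are not provided by scikit-image and would need to be implemented separately from the run-length definitions.

```python
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

def first_order_and_glcm_features(gray, distances=(1,),
                                  angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """First-order (histogram) statistics plus six GLCM features for a uint8 image."""
    pixels = gray.ravel().astype(float)
    counts = np.bincount(gray.ravel(), minlength=256)
    first_order = {
        "mean": pixels.mean(),
        "variance": pixels.var(),
        "skewness": stats.skew(pixels),
        "kurtosis": stats.kurtosis(pixels),
        "entropy": stats.entropy(counts + 1e-12),
    }

    glcm = graycomatrix(gray, distances=list(distances), angles=list(angles),
                        levels=256, symmetric=True, normed=True)
    glcm_features = {prop: graycoprops(glcm, prop).mean()   # averaged over distances/angles
                     for prop in ("contrast", "dissimilarity", "homogeneity",
                                  "energy", "correlation", "ASM")}
    return first_order, glcm_features
```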
For both tasks, analysis and discussion of your findings are expected.
Task 3: Object Tracking
Download from Blackboard the data files 'x.csv' and 'y.csv', which contain the real coordinates [x,y] of a moving target, and the files 'a.csv' and 'b.csv', which contain their noisy version [a,b] provided by a generic video detector (e.g. frame-to-frame image segmentation of the target).
Implement a Kalman filter in a software application that accepts the noisy coordinates [a,b] as input and produces the estimated coordinates [x*,y*] as output. For this, you should use a Constant Velocity motion model F with constant time intervals Δt = 0.1, and a Cartesian observation model H. The covariance matrices Q and R of the respective noises are the following:
1) You should plot the estimated trajectory of coordinates [x*,y*], together with the real [x,y] and the noisy ones [a,b] for comparison.
2) You should also assess the quality of the tracking by calculating the mean and standard deviation of the absolute error and the Root Mean Squared error (i.e. compare both noisy and estimated coordinates to the ground truth).
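A minimal sketch of such a constant-velocity Kalman filter and of the error metrics in requirement 2) is below. The Q and R matrices specified in the brief are not reproduced in this text, so the defaults here are placeholder assumptions only and should be replaced with the given matrices; the csv files can be loaded with, e.g., np.loadtxt('a.csv', delimiter=',').

```python
import numpy as np

def kalman_track(a, b, dt=0.1, Q=None, R=None):
    """Constant-velocity Kalman filter over the noisy detections [a, b].

    State is [x, vx, y, vy]. Q and R should be the covariance matrices given
    in the brief; the defaults below are placeholder assumptions only.
    """
    F = np.array([[1, dt, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, dt],
                  [0, 0,  0, 1]], dtype=float)      # constant-velocity motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)       # Cartesian observation model
    Q = np.eye(4) * 0.01 if Q is None else Q        # placeholder process noise
    R = np.eye(2) * 1.0 if R is None else R         # placeholder measurement noise

    x = np.array([a[0], 0.0, b[0], 0.0])            # initialise from the first detection
    P = np.eye(4)
    estimates = []
    for za, zb in zip(a, b):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        z = np.array([za, zb])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        estimates.append((x[0], x[2]))              # [x*, y*]
    return np.array(estimates)

def tracking_errors(coords, truth):
    """Mean and std of the absolute error, plus RMSE, against the ground truth."""
    err = np.linalg.norm(coords - truth, axis=1)    # per-frame Euclidean error
    return err.mean(), err.std(), np.sqrt((err ** 2).mean())
```

The same tracking_errors call can be applied to both the noisy coordinates [a,b] and the estimated coordinates [x*,y*] to compare them against the ground truth [x,y], as requirement 2) asks.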
Attachment:- Computer Vision.rar