
Academic Year: 2023-2024

Assessment Introduction:

Module Code: EL3105

Module Title: Computer Vision

Course: BEng Electronic Engineering, BEng Robotic Engineering, MEng Robotic Engineering

Title of the Brief: Monocular Visual Odometry with Loop Correction

Type of assessment: Assignment

Introduction

This Assessment Pack consists of a detailed assignment brief, guidance on what you need to prepare, and information on how class sessions support your ability to complete the assessment successfully. The tutor responsible for this coursework will introduce the assignment on Tuesday 23/01/2024 during the Computer Vision class; additional support for this assignment will be provided during scheduled lab sessions. You’ll also find information on this page to guide you on how, where, and when to submit. If you need additional support, please make a note of the services detailed in this document.

Submission details; how, when, and where to submit:

Assessment Release date: Tuesday, 23/01/2024

Assessment Deadline Date and time: Tuesday, 26/03/2024

Please note that this is the latest time you can submit, not the time at which you should submit!

You should aim to submit your assessment in advance of the deadline.

The Turnitin submission link on Blackboard will be visible to you on 05/03/2024.

Feedback will be provided by: 10/05/2024

This assignment constitutes 50% of the total module assessment mark. You should write a report for this assignment documenting your solutions to the tasks defined in the assignment brief given below. The report should include a very short introduction describing the problem, a description of your adopted solutions, a more extensive description of the results, and a conclusions section summarising the results. The report should be approximately 1500 words long, plus relevant materials (references and appendices). You should use the Harvard referencing system for this report. The report should be submitted electronically to the “Monocular Visual Odometry” Turnitin submission point on Blackboard.

You should submit documented MATLAB/Python code solving the given tasks. The code should be self-contained, i.e., it should run as it is, without the need for any additional tools/libraries. If there are multiple files, please create a single zip archive containing all the files. The code should be submitted separately from the report into the Blackboard EL3105 assignment area denoted as “Monocular Visual Odometry-Assignment Code”.

Note: If you have any valid mitigating circumstances that mean you cannot meet an assessment submission deadline and you wish to request an extension, you will need to apply online via MyUCLan with your evidence prior to the deadline. Further information on Mitigating Circumstances is available via this link.

We wish you every success in completing your assessment. Read this guidance carefully and, if you have any questions, please discuss them with your Module Leader.

Teaching into assessment

The assignment will be introduced and discussed at the lecture on Tuesday 23rd of January. During that session the background of this assignment will be introduced, the data structure will be explained, and the expected results will be illustrated with examples. The set of software tools available for the assignment will also be described. All the algorithmic aspects necessary for the successful completion of the assignment were or will be covered during the lectures, tutorials, and laboratory sessions. These include: keypoint detection, keypoint descriptor calculation, robust keypoint matching, fundamental matrix estimation, 3D point reconstruction and camera pose estimation, and structure from motion algorithms.
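
For illustration only, the sketch below shows how these building blocks fit together for a single image pair, assuming Python with OpenCV and NumPy. ORB is used purely as an example detector/descriptor and the ratio-test threshold is an assumed value; neither is a requirement of the assignment.

import cv2
import numpy as np

def match_and_estimate_F(img1, img2):
    """Detect keypoints, match them robustly, and estimate the fundamental matrix."""
    orb = cv2.ORB_create(nfeatures=2000)          # example detector/descriptor only
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Robust matching: k-nearest-neighbour Hamming matching plus Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Fundamental matrix estimation with RANSAC rejects the remaining outlier matches.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = inlier_mask.ravel() == 1
    return F, pts1[inliers], pts2[inliers]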

Additional Support

All links are available through the online Student Hub.

1.   Our Library resources link can be found in the library area of the Student Hub or via your subject librarian at SubjectLibrarians@uclan.ac.uk (Mr. Neil Marshall, NMarshall7@uclan.ac.uk).

2.   Support with your academic skills development (academic writing, critical thinking and referencing) is available through WISER on the Study Skills section of the Student Hub.

3.   For help with Turnitin, see Blackboard and Turnitin Support on the Student Hub.

4.   If you have a disability, specific learning difficulty, long-term health or mental health condition, and have not yet advised us, or would like to review your support, Inclusive Support can assist with reasonable adjustments and support. To find out more, you can visit the Inclusive Support page of the Student Hub.

5.   For mental health and wellbeing support, please complete our online referral form, or email wellbeing@uclan.ac.uk. You can also call 01772 893020, attend a drop-in, or visit our UCLan Wellbeing Service Student Hub pages for more information.

6.   For any other support query, please contact Student Support via studentsupport@uclan.ac.uk.

7.   For consideration of Academic Integrity, please refer to the detailed guidelines in our policy document. All assessed work should be genuinely your own work, and all resources fully cited.

8.  For this assignment, you are not permitted to use any category of AI tools.

Assignment Brief

This assignment is designed to give you an insight into selected aspects of computer vision applied to camera calibration, visual odometry, and structure from motion, i.e., camera pose and orientation estimation from a sequence of images taken by that camera. You are asked to solve various tasks including detection of image keypoints, their robust matching, camera pose estimation, and correction of the camera pose drift error. You are asked to write computer vision software operating in soft real time, as well as to test your solution and interpret the results.

This assignment will enable you to:

•    Deepen your understanding of camera calibration, keypoints detection / matching, homography, fundamental matrix, and camera pose estimation.

•    Recognize software design challenges behind implementations of computer vision algorithms.

•    Design and optimise software to meet specified requirements.

•    Acquire a hands-on understanding of camera calibration and simultaneous localisation and mapping problems.

(These correspond to points 1, 2, 4 and 5 of the module learning outcomes. Module learning outcomes are provided in the Module Descriptor.)

The assignment consists of two main tasks. The first task is to perform camera calibration using the images stored in the CalibrationImages_MVO.zip file. These calibration images were captured with a checkerboard calibration pattern placed at different positions and orientations. The size of each checkerboard square is 14.44 mm × 14.44 mm.
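
As a hint only, a minimal calibration sketch in Python/OpenCV is shown below. The folder name, image extension, and the inner-corner count of the checkerboard (pattern_size) are assumptions that you will need to adapt to the actual contents of CalibrationImages_MVO.zip.

import cv2
import glob
import numpy as np

pattern_size = (9, 6)   # inner corners per row/column: an assumption, check the images
square_size = 14.44     # checkerboard square size in mm (from the brief)

# 3D coordinates of the corners in the checkerboard's own frame (Z = 0 plane).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

obj_points, img_points = [], []
for fname in glob.glob('CalibrationImages_MVO/*.jpg'):   # assumed path/extension
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        # Refine the detected corners to sub-pixel accuracy before calibration.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# calibrateCamera returns the intrinsic matrix K and the lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('RMS reprojection error:', rms)
print('Intrinsic matrix K:\n', K)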

The second task is to estimate three-dimensional camera poses (position and orientation) for the sequence of images from the CVML Monocular Visual Odometry dataset stored in the CVML_MVO_Loop.zip file. These images were captured with varying camera position and orientation. The images in both CalibrationImages_MVO.zip and CVML_MVO_Loop.zip were taken by the same camera. You are asked to write MATLAB/Python programs to estimate the intrinsic camera parameters using the data in the CalibrationImages_MVO.zip file and subsequently estimate the camera pose for each corresponding image in the CVML_MVO_Loop.zip sequence.
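
A minimal sketch of the sequential pose-chaining step is given below, assuming Python with OpenCV and NumPy. Here K is the intrinsic matrix obtained in the calibration task, and match_points is a hypothetical helper (for example, built from the matching step sketched earlier) that returns corresponding pixel coordinates between two frames.

import cv2
import numpy as np

def accumulate_poses(frames, K, match_points):
    """Chain frame-to-frame relative poses into global camera positions."""
    R_w, t_w = np.eye(3), np.zeros((3, 1))   # pose of the first camera = world origin
    trajectory = [t_w.copy()]

    for prev, curr in zip(frames[:-1], frames[1:]):
        pts1, pts2 = match_points(prev, curr)   # hypothetical matching helper

        # Essential matrix with RANSAC using the calibrated intrinsics K, then
        # decomposition into R, t (t is only known up to scale for a monocular camera).
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

        # recoverPose returns (R, t) mapping points from the previous camera frame
        # into the current camera frame; invert and compose to get the current
        # camera's rotation and centre in world coordinates.
        R_w = R_w @ R.T
        t_w = t_w - R_w @ t
        trajectory.append(t_w.copy())

    return trajectory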

In visual odometry, the estimate of the global camera pose for the current frame tends to drift from the true pose due to matching errors between consecutive frames. If the camera trajectory loops, showing the same part of the scene as before, this can be used to correct some of the camera pose drift errors. You are asked to implement an algorithm for such “loop closure” .

It is essential that you design your camera pose estimation algorithm so that it can be used in a sequential manner, i.e., when estimating the current camera pose, only the current and preceding images can be used.
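
One deliberately simple way to use a detected loop is sketched below, assuming the trajectory is stored as a list of 3x1 camera positions in world coordinates and that the loop has already been detected, for example by matching the current frame against an earlier keyframe (which respects the “preceding images only” constraint). The accumulated position error at the loop closure is spread linearly along the trajectory; this is an illustration rather than a full solution, and pose-graph optimisation or bundle adjustment would give a more principled correction.

import numpy as np

def correct_loop_drift(trajectory, loop_start_idx, loop_end_idx):
    """Distribute the position error between two poses that should coincide.

    trajectory: list of 3x1 camera positions in world coordinates.
    loop_start_idx / loop_end_idx: indices of the frames closing the loop.
    """
    corrected = [p.copy() for p in trajectory]
    drift = trajectory[loop_end_idx] - trajectory[loop_start_idx]
    n = loop_end_idx - loop_start_idx
    for i in range(loop_start_idx, loop_end_idx + 1):
        alpha = (i - loop_start_idx) / n     # 0 at the loop start, 1 at the loop end
        corrected[i] = trajectory[i] - alpha * drift
    return corrected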

The CalibrationImages_MVO.zip and CVML_MVO_Loop.zip files are available from the Blackboard EL3105 Assignment space.

References:

Hartley, R. and Zisserman, A. (2003), Multiple View Geometry in Computer Vision, Cambridge University Press.

Szeliski, R. (2022), Computer Vision: Algorithms and Applications, Springer, Chapter 7 Structure from Motion (pp. 345-377).

Bay, H., Tuytelaars, T. and Van Gool, L. (2006), SURF: Speeded Up Robust Features, European Conference on Computer Vision, ECCV 2006, pp. 404-417.

Mikolajczyk, K. and Schmid, C. (2005), A performance evaluation of local descriptors, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 27, Issue 10.

Triggs, B. et al. (2002), Bundle Adjustment: A Modern Synthesis, International Workshop on Vision Algorithms.

MATLAB help:

“Monocular Visual Odometry”

“Monocular Visual Simultaneous Localization and Mapping”

Late work

Work submitted electronically may be submitted after the deadline to the same Turnitin assignment slot and will be automatically flagged as late. Except where an extension of the hand-in deadline date has been approved, lateness penalties will be applied in accordance with the University policy as follows:

(Working) Days          Late Penalty
1 - 5                   Maximum mark that can be achieved: 40%
More than 5             0% given

Marking scheme

Your report should contain the following elements; it will be marked in accordance with the following marking scheme:

Item                                            Weight (%)
1.   Camera calibration                         30
2.   Camera Pose Estimation                     30
3.   Drift error reduction (loop closure)       15
4.   Visualisation of the results               15
5.   Presentation of the report                 10
Total                                           100

Feedback Guidance:

Reflecting on Feedback: how to improve.

From the feedback you receive, you should understand:

•    The grade you achieved.

•    The best features of your work.

•    Areas you may not have fully understood.

•    Areas you are doing well but could develop your understanding.

•    What you can do to improve in the future - feedforward.

Use the WISER: Academic Skills Development service. WISER can review your feedback with you and help you understand it. You can also use the WISER Feedback Glossary.

Next Steps:

•    List the steps you have taken to respond to previous feedback.

•    Summarise your achievements.

•    Evaluate where you need to improve (keep this handy for future work):