COMP SCI 3316 7316 Combined Evol. Comp.
Semester 2, 2022
Assignment 2: The Dynamic Traveling Thief Problem
1 Overview
Assignments should be done in groups consisting of five to six (5–6) students. Each student has to take major responsibility for one of the exercises and collaborate with the team members on the remaining exercises. Each exercise needs one student taking major responsibility. The group has to make sure that the workload is evenly distributed. For Assignment 2 the minimum requirement is that the different algorithms for the (dynamic) TTP are implemented as described below and that they have been evaluated on the designed (dynamic) benchmarks.
2 Assignment
In this assignment the task is to design a library and to implement evolutionary algorithms for the traveling thief problem (TTP). The Traveling Thief Problem is defined as follows. Given is a set of cities N = {1, . . . , n} for which a distance d_ij, i, j ∈ N, between any pair of cities is known. Every city i except the first contains a set of items M_i = {1, . . . , m_i}. Each item k positioned in city i is characterized by its profit value p_ik and weight w_ik, i.e. the item I_ik = (p_ik, w_ik). The thief must visit each of the cities exactly once, starting from the first city and returning to it in the end. Any item may be picked up into the knapsack in any city, as long as the total weight of the collected items does not exceed the maximum capacity W. A renting rate R has to be paid per time unit spent travelling. v_max and v_min denote the maximum and minimum speeds at which the thief can move, respectively. The goal is to find a tour and a packing plan that together yield the maximal objective value (profit − travel cost).
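To make the objective concrete, the following is a minimal sketch of the standard TTP evaluation, in which the thief's speed drops linearly from v_max (empty knapsack) to v_min (full knapsack) and the objective is the total profit of the packed items minus R times the total travel time. All class and method names here are illustrative only; for the assignment itself it is mandatory to use the provided TTPInstance.evaluate.

```java
// Sketch of the standard TTP objective under the linear speed model.
// Names (TtpObjectiveSketch, objective, ...) are ours, not the provided package's API.
public final class TtpObjectiveSketch {

    /** Speed drops linearly from vMax (empty knapsack) to vMin (full knapsack). */
    static double speed(double carried, double W, double vMin, double vMax) {
        return vMax - carried * (vMax - vMin) / W;
    }

    /**
     * Objective = total profit of packed items - R * total travel time,
     * where the time of each leg depends on the weight carried so far.
     */
    static double objective(double[][] dist, int[] tour,
                            double[] profit, double[] weight, int[] itemCity,
                            boolean[] packed, double W, double R,
                            double vMin, double vMax) {
        // Weight picked up at each city, and the total profit of the packing plan.
        double[] weightAtCity = new double[dist.length];
        double totalProfit = 0.0;
        for (int k = 0; k < packed.length; k++) {
            if (packed[k]) {
                totalProfit += profit[k];
                weightAtCity[itemCity[k]] += weight[k];
            }
        }
        double carried = 0.0, time = 0.0;
        for (int i = 0; i < tour.length; i++) {
            int from = tour[i];
            int to = tour[(i + 1) % tour.length];  // last leg returns to the start city
            carried += weightAtCity[from];         // items are picked up when leaving a city
            time += dist[from][to] / speed(carried, W, vMin, vMax);
        }
        return totalProfit - R * time;
    }
}
```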
The TTP is a combination of two very famous NP-hard optimisation problems (the Traveling Salesperson Problem and the Knapsack Problem), and you should design, implement, and test evolutionary algorithms for this problem.
Additional information:
● The paper that defined the Traveling Thief Problem in 2013: http://dx.doi.org/
● The paper that describes a comprehensive benchmark set and some first algorithms in 2014: http://cs.adelaide.edu.au/~optlog/CEC2014Comp/2014gecco-ttp.pdf
● More information is available on the WCCI/CEC 2014 Competition Website http://cs.adelaide.edu.au/~optlog/CEC2014Comp/ and on the TTP project page http://cs.adelaide.edu.au/~optlog/research/ttp.php
● In our scenarios City 1 is the starting point and end point of a tour, and there is no item available at that city. The first item, Item 1, is available in City 2. This also holds for the dynamic benchmarks.
When designing your optimisation algorithms for the TTP, it is desirable to follow object-orientated design practices and to build a modular system. All programming work needs to be done in JAVA. We provide the implementation of the objective function. It is mandatory to use the function TTPInstance.evaluate from that package. Note that the package already provides an instance reader and some first optimisation approaches (a random local search and a (1+1)-EA).
Exercise 1 Problem-specific variation operators for the packing plan (20 marks)
The two approaches implemented in class Optimisation of our above-mentioned package randomly select or deselect an item. Try to outperform these two reference approaches. You can consider (simple/complex) local searches, problem features such as certain ratios and distances, other evolutionary algorithms, purely mathematical approaches, . . .
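As one illustration of a problem-specific operator (a sketch of one possible idea, not a prescribed solution), the code below biases the choice of which item to flip by its profit/weight ratio instead of choosing uniformly at random. The class and method names are our own; positive profits and weights are assumed, as in the standard TTP benchmarks.

```java
import java.util.Random;

// Sketch of a problem-specific variation operator for the packing plan:
// instead of flipping a uniformly random item, bias the choice by the
// profit/weight ratio. Names are illustrative, not the provided package's API.
public final class BiasedFlip {

    /** Flip one item, preferring to add high-ratio items and drop low-ratio ones. */
    static void mutate(boolean[] packing, double[] profit, double[] weight, Random rng) {
        // Score each candidate flip: adding item k scores p_k/w_k,
        // removing item k scores w_k/p_k (drop heavy, low-profit items first).
        // Assumes strictly positive profits and weights.
        double[] score = new double[packing.length];
        double total = 0.0;
        for (int k = 0; k < packing.length; k++) {
            score[k] = packing[k] ? weight[k] / profit[k] : profit[k] / weight[k];
            total += score[k];
        }
        // Roulette-wheel selection over the flip scores.
        double r = rng.nextDouble() * total;
        for (int k = 0; k < packing.length; k++) {
            r -= score[k];
            if (r <= 0) { packing[k] = !packing[k]; return; }
        }
        packing[packing.length - 1] ^= true;  // numerical fallback
    }
}
```

Such an operator keeps the cheap one-flip neighbourhood of the reference (1+1)-EA but concentrates the search on promising items; whether this pays off must still be shown empirically on the nine instances.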
The instances that you should test your approach on are the nine instances listed in the Instances and Code-Section of the competition page http://cs.adelaide.edu.au/
Notes:
● For this exercise, you are only allowed to modify the packing plan, and you need to use the TSP tours as provided in the folder instances of our package.
● The time limit for each optimisation is 10 minutes, and multi-threading is not allowed.
Submit a document Exercise1.pdf in which you describe the operator(s). You may use diagrams and examples to support your arguments.
Exercise 2 Dynamic items (40 marks)
We now consider the case where the availability of items is dynamically changing and the tour is fixed as in Exercise 1. We consider the performance of algorithms over 100,000 generations. Unavailable items are not allowed to be selected, and the corresponding bits have to be set to zero for the objective function evaluation.
● Each item can either be available or unavailable. Start with the whole set of items available and the best solution obtained in the previous exercise. Every 50 generations each item changes its status (from available to unavailable or vice versa) with probability 5/m, where m is the total number of items.
● In order to be able to compare algorithmic performance, you have to generate the sequence of sets of items available at each generation first, and run the algorithms afterwards taking this sequence into account. You should generate 2 dynamic benchmarks for each of the nine benchmarks, where each dynamic benchmark consists of a sequence of sets of items generated as outlined above. To generate the benchmarks, only store the generation number and the applied operation for each change.
● To evaluate the performance of your algorithms, plot the quality of the solutions obtained in each generation. Do 10 repeated runs on each instance for each algorithm considered in Exercise 1, and plot the average solution quality for each number of generations and each algorithm. Produce for each benchmark one figure showing the results of all algorithms.
● Design a new algorithm that performs as well as possible on these benchmarks. All algorithms for dynamic problems have to be online algorithms, which can only take into account the changes made to the instance up to the generation they have already processed. They cannot look into the "future". If you are using a population-based algorithm, the population size has to be at most 20. Plot the average quality per generation of 10 runs on each benchmark.
● Submit a document Exercise2.pdf in which you describe your best performing algorithm and the results on the benchmarks. You may use diagrams and examples to support your arguments.
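A minimal sketch of how the change sequence described above could be generated and replayed, storing only the generation number and the flipped item for each change. The record layout and names are illustrative, not a prescribed format:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

// Sketch of a dynamic-items benchmark generator: every 50 generations each
// item flips its availability with probability 5/m, and only the
// (generation, item) pairs are stored. Names and layout are illustrative.
public final class DynamicItemBenchmark {

    /** One recorded change: at generation {@code gen}, item {@code item} flips. */
    record Change(int gen, int item) {}

    /** Generate the full change sequence for a run of the given length. */
    static List<Change> generate(int m, int generations, long seed) {
        Random rng = new Random(seed);
        List<Change> changes = new ArrayList<>();
        double p = 5.0 / m;
        for (int gen = 50; gen <= generations; gen += 50) {
            for (int item = 0; item < m; item++) {
                if (rng.nextDouble() < p) {
                    changes.add(new Change(gen, item));  // available <-> unavailable
                }
            }
        }
        return changes;
    }

    /** Replay the stored changes up to a given generation onto an availability mask. */
    static boolean[] availabilityAt(int m, int generation, List<Change> changes) {
        boolean[] available = new boolean[m];
        Arrays.fill(available, true);  // all items start available
        for (Change c : changes) {
            if (c.gen() <= generation) available[c.item()] ^= true;
        }
        return available;
    }
}
```

Fixing the seed makes the benchmark reproducible, so all algorithms can be compared on exactly the same change sequence; an online algorithm would of course only consult changes up to its current generation.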
Exercise 3 Dynamic Tours (40 marks)
We now consider the case where the tour is dynamically changing and all items are available. Start with the tour given in Exercise 1. You have to consider the performance of the algorithms over 100,000 generations.
● Generate 1 dynamic benchmark for each of the nine benchmarks of Exercise 1, where each dynamic benchmark consists of a sequence of instances in which every 50 generations the tour changes by a randomly chosen exchange operation. To generate the benchmarks, only store the generation number and the applied operation for each change.
● Generate 1 dynamic benchmark for each of the nine benchmarks of Exercise 1, where each dynamic benchmark consists of a sequence of instances in which every 500 generations the tour changes by a randomly chosen 2-opt step. To generate the benchmarks, only store the generation number and the applied operation for each change.
● Run your best performing algorithm of Exercise 2 on these benchmarks and show the average performance over 10 runs in the same way as for Exercise 2.
● Design a new algorithm that performs as well as possible on these benchmarks. All algorithms for dynamic problems have to be online algorithms, which can only take into account the changes made to the instance up to the generation they have already processed. They cannot look into the "future". If you are using a population-based algorithm, the population size has to be at most 20. Plot the average quality per generation of 10 runs on each benchmark.
● Submit a document Exercise3.pdf in which you describe your best performing algorithm and the results on the benchmarks. You may use diagrams and examples to support your arguments.
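The two tour perturbations used in these benchmarks, an exchange of two cities and a 2-opt step (reversal of a tour segment), can be sketched as follows. Positions are drawn so that City 1 (index 0) remains the fixed start and end of the tour; the names are our own:

```java
import java.util.Random;

// Sketch of the two tour perturbations for the dynamic benchmarks:
// exchange (swap two cities) and a 2-opt step (reverse a segment).
// Position 0 (City 1) is never moved. Names are illustrative.
public final class TourPerturbation {

    /** Exchange: swap the cities at two random positions (never position 0). */
    static void exchange(int[] tour, Random rng) {
        int i = 1 + rng.nextInt(tour.length - 1);
        int j = 1 + rng.nextInt(tour.length - 1);
        int tmp = tour[i]; tour[i] = tour[j]; tour[j] = tmp;
    }

    /** 2-opt step: reverse the tour segment between two random positions. */
    static void twoOpt(int[] tour, Random rng) {
        int i = 1 + rng.nextInt(tour.length - 1);
        int j = 1 + rng.nextInt(tour.length - 1);
        for (int lo = Math.min(i, j), hi = Math.max(i, j); lo < hi; lo++, hi--) {
            int tmp = tour[lo]; tour[lo] = tour[hi]; tour[hi] = tmp;
        }
    }
}
```

Both operations preserve the tour as a permutation starting at City 1, so replaying the stored (generation, operation) pairs with a fixed random seed reproduces the same benchmark sequence for every algorithm.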
Exercise 4 Team work: who has done what? (zero points)
We would like each team member to write one paragraph about what each member has contributed to this assignment. We will not mark this, and it will not have any effect on the marking of the other exercises. We would like to encourage self-regulation within the group and cooperative learning. Submit a document Exercise4.pdf in which each team member describes their own contribution.
3 Marking
Marking of implementations will be done according to the following criteria:
● correct overall implementation - 20%
● quality of the code (succinctness, object-orientated design, class structure) - 10%
● design of the operators and algorithms for the (dynamic) TTP and justification of the chosen parameters - 30%
● quality of the results achieved on the different problems and algorithm comparison - 40%
4 Procedure for handing in the assignment
Work should be handed in using the course website. The submission must include:
● all source files: if your code does not compile or if it is not sufficiently well documented, we will cap the code-related marks at 50%
● all configuration files
● a README.txt file containing
– instructions to run the code
– the names, student numbers, and email addresses of the group members
● a pdf containing all figures which show the performance of your algorithms.
● Exercise1.pdf, Exercise2.pdf, Exercise3.pdf, Exercise4.pdf as required in the exercises.
Failure to meet all requirements of the General procedure for handing in the assignment will lead to a reduction by twenty (20) marks.
Note: there will be a progress presentation session for this assignment. This is a great opportunity for you to get feedback on your progress, and to make last adjustments for the final submission. In these progress presentations, we are expecting each group to briefly outline their achievements.
2022-09-20