QBUS2820 Predictive Analytics
Individual Assignment 1
Key information
1. Required submissions (through Canvas/Assignments/Individual Assignment 1)
a. ONE written report (word or pdf format)
b. ONE Jupyter Notebook .ipynb
2. Due date/time and closing date/time: See Canvas. The late penalty for the assignment is 5% of the assigned mark per day, starting after 23:59 on the due date.
3. Weight: 30% of the total mark of the unit.
4. Length: The main text of your report should have a maximum of 10 pages with the usual font size of 11-12. You should write a complete report including sections such as business context, problem formulation, data processing, Exploratory Data Analysis (EDA), methodology, analysis, conclusions and limitations, etc.
5. If you wish to include additional material, you can do so by creating an appendix. There is no page limit for the appendix. Keep in mind that making good use of your audience's time is an essential business skill. Every sentence, table and figure has to count. Extraneous and/or wrong material will reduce your mark regardless of the quality of the rest of the assignment.
6. Anonymous marking: In line with the University's anonymous marking policy, please only include your student ID in the submitted report, and do NOT include your name. The file names of your report and code file should use the following format, replacing "SID" with your Student ID. Example: SID_Qbus2820_Assignment1.
7. Presentation/clarity is part of the assignment. Markers will allocate 10% of the marks for clarity of writing and presentation. Numbers with decimals should be reported to four decimal places.
Key rules:
• Carefully read the requirements for each part of the assignment.
• Please follow any further instructions announced on Canvas.
• You must use Python for the assignment. Use "random_state=1" when needed, e.g. when using the "train_test_split" function. For all other parameters that are not specified in the questions, use the default values of the corresponding Python functions.
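The seeding rule above can be illustrated as follows. This is a minimal sketch on toy data; in the assignment you would load "news_pop.csv" and use the "popularity" column as the target.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data standing in for the news features and popularity target
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# random_state=1 as required; all other parameters left at their defaults
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Repeating the split with the same seed yields identical partitions,
# which is what makes the notebook reproducible end to end
X_train2, X_test2, _, _ = train_test_split(X, y, random_state=1)
assert (X_train == X_train2).all() and (X_test == X_test2).all()
```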
• Reproducibility is fundamental in data analysis, so you are required to submit a Jupyter Notebook that generates your results. Not submitting your code will lead to a loss of 50% of the assignment marks.
• The notebook must run without errors and produce results consistent with the report when accessed through Kernel -> Restart & Run All from the Jupyter menu, assuming that the train and test datasets are in the same folder as the notebook. Failure to do so can result in a loss of up to 50% of the assignment marks.
• Failure to read information and follow instructions may lead to a loss of marks. Furthermore, note that it is your responsibility to be informed of the University of Sydney and Business School rules and guidelines, and follow them.
The Task
You will work on the News Popularity dataset.
1. Problem description
As a consultant working for a media firm, the company asks you to develop predictive models to predict the ‘popularity’ of news articles using data analysis techniques. The goal is to predict how popular an article is going to be before it is published online. This will provide valuable information such as pricing for ads, selecting the ‘best’ articles from a pool of candidates, etc. A secondary goal is to get an understanding of which factors drive popularity, so the company can adapt and improve around it (nudge the journalists towards certain writing styles or invest more in some topics than others).
To enable this task, you were provided with a dataset containing summary characteristics of previously published news articles, such as length in words, which day of the week each was published, rates of 'positive' and 'negative' words in the article, etc. Each article has a 'popularity' score, which is what you are required to predict.
Most variables come from 'Sentiment Analysis', a field within the broader area of Text Analytics, which can be understood as 'automatic' or 'statistical' tools for understanding natural language.
Select three models to predict the target variable 'popularity'.
These models are:
• a linear regression model,
• a kNN regression model,
• a third model. This can be any model of your choice that is neither linear regression nor kNN (it might even be a model not covered in the QBUS2820 unit). This is to encourage you to self-explore and self-study, since the ability to self-study is critical in the field of machine learning, which is evolving rapidly.
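The two required baselines above can be fit with scikit-learn. The sketch below uses synthetic placeholder data (the feature matrix, coefficients, and noise scale are illustrative assumptions, not values from the news dataset); in the assignment they would be replaced by your processed training split.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

# Synthetic stand-in for the processed training data
rng = np.random.default_rng(1)
X_train = rng.normal(size=(50, 3))
y_train = X_train @ np.array([0.5, -0.2, 0.1]) + rng.normal(scale=0.1, size=50)

# Linear regression baseline
lin = LinearRegression().fit(X_train, y_train)

# kNN regression baseline; n_neighbors left at its default of 5,
# per the brief's rule on unspecified parameters
knn = KNeighborsRegressor().fit(X_train, y_train)

lin_pred = lin.predict(X_train[:2])
knn_pred = knn.predict(X_train[:2])
```

A tuned third model of your choice would then be fit and compared on the same split.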
As part of the contract, you need to write a report according to the details below.
2. Understanding the data
This task does not require prior knowledge of Text Analytics. You might be unfamiliar with the meaning of some of the variables in the dataset, and part of the assignment is to become familiar with them (at least to gain a superficial understanding of what they mean). As a data analyst, working with unfamiliar domains of application is the norm. You should use the techniques covered in the unit, as well as any you discover yourself, to complete the prediction task.
You can download the dataset “news_pop.csv” from Canvas. The response variable is the popularity column in the dataset.
The description of the variables can be found in the file "news_pop_descr.txt".
Note that some variables in the dataset might not be suitable for prediction; part of the assignment is reasoning about and justifying which variables are meaningful in a prediction context.
You should evaluate the performance of your models in the appropriate way, using Mean Absolute Error (MAE). Additionally, report the accuracy of each predictive model as the percentage of times it correctly predicts whether a given news article will have a popularity larger than 0.5.
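One way to compute the two required metrics is sketched below, assuming `y_test` holds the true popularity scores and `y_pred` the model's predictions (the numbers here are placeholder values, not dataset results).

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

y_test = np.array([0.2, 0.7, 0.4, 0.9])   # placeholder true scores
y_pred = np.array([0.3, 0.6, 0.6, 0.8])   # placeholder predictions

# Mean Absolute Error, the primary evaluation metric
mae = mean_absolute_error(y_test, y_pred)

# Percentage of articles for which the model correctly predicts
# whether popularity exceeds 0.5
accuracy = np.mean((y_test > 0.5) == (y_pred > 0.5)) * 100

# Report to four decimal places, per the presentation rules
print(f"MAE: {mae:.4f}, threshold accuracy: {accuracy:.4f}%")
```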
The dataset comes from an original research paper:
Fernandes, K., Vinagre, P., & Cortez, P. (2015, September). A proactive intelligent decision support system for predicting the popularity of online news. In Portuguese conference on artificial intelligence (pp. 535-546). Springer, Cham.
The original data can be found in:
https://archive.ics.uci.edu/ml/datasets/online+news+popularity
but remember that the task is to ‘simulate’ the scenario mentioned in Section 1: Problem description.
3. Written report
The purpose of the report is to describe, explain, and justify your solution to the employer with a polished presentation. Be concise and objective. Find ways to say more with less. When in doubt, put it in the appendix. Below are some guidelines on how to work on the Task.
Preparation. You read and understood the assignment requirements and are aware that this is part of the assessment. You understand that machine learning is grounded in rigorous logic and theory that should inform your practical analysis. You understand that there is no single right solution and that trying different approaches and discovering empirically what works best for a particular problem is natural and desirable in this type of analysis.
Business context and problem formulation. The report includes a discussion of the context for the analysis, the problem and questions/hypotheses to be addressed, and how you plan to measure the success of your proposed solutions.
Data processing. You make sure that the dataset is free of errors and correctly processed for your analysis. You handle missing values and other issues appropriately. You describe the data processing steps in a clear and concise way.
Exploratory data analysis (EDA). Your report describes your EDA process, presenting only selected results. You studied key variables individually and pairwise using appropriate figures and descriptive statistics. You note any features of the data that are relevant for model building. You note the presence of outliers and any other anomalies that can affect the analysis. You explain the relevance of the EDA results to your subsequent modelling. Your EDA section in the report is concise, leaving additional figures and tables to the appendix if needed.
Variable selection. You describe and explain your process for variable selection. Your choices are justified by data analysis, domain knowledge, logic, and/or trial and error. Data-driven choices are better than opinion-based choices.
Methodology and modelling. You clearly describe and justify the models, methods, and algorithms in your analysis. The choice of methods is logically related to the assignment requirements, the substantive problem, underlying theoretical knowledge, and data analysis. This may involve systematic trial and error, but the report should focus on your final solutions. Your methodology pays attention to statistical variability. You report all crucial assumptions and check them as relevant via formal and informal diagnostics. You clearly recognise when an assumption is not satisfied or questionable. Some problems may be unfixable given the available data and methods. In this case you can identify what additional information or methodology could allow you to fix these problems.
Analysis and conclusions. Your analysis is rich. You correctly interpret the results and discuss how they address the substantive question. The reasoning from methodology and results to your conclusions is logical and convincing. You are not misled by overfitting. Your analysis pays attention to statistical variability. You make no claims for which you have no evidence. You do not make statements that imply causation when discussing associations. You explicitly acknowledge when limitations of the data or methods lead to uncertainty about your answer to the substantive question.
Writing. Your writing is concise, clear, precise, and free of grammatical and spelling errors. You use appropriate technical terminology. Your paragraphs and sentences follow a clear logic and are well connected. There is a clear distinction between the essential parts of the report and less important material (use the appendix). Your text refers to meaningful names for variables and subjects. If you use an abbreviation or label, you first have to define it.
Report. Your report is well organised and professionally presented and formatted, as if it had been prepared for a client later in your career. There are clear divisions between sections and paragraphs.
Tables. Your tables are appropriately formatted and have a clear layout. The tables have informative row and column labels. The tables are, as far as possible, easy to understand on their own (in the real world, a significant part of your audience will skim-read by going straight to the tables). The tables do not contain information that is irrelevant to the discussion in your report. Your tables are not images. The tables are placed near the relevant discussion in your report. There is no text around your tables.
Figures. Your figures are easy to understand and have informative titles, captions, labels, and legends. The figures are well formatted and laid out. The figures are placed near the relevant discussion in your report. Your figures have appropriate resolution and were saved directly from Python into an image file format. There is no text around your figures.
Numbers. All numerical results are reported to four decimal places.
Referencing. You follow the Harvard Referencing System.
Python code. The code is presented in a neat and compact way. The code uses meaningful variable names and can be easily followed by someone with training in Python and statistics. Someone should be able to run your code and reproduce all the results that appear in your report. Your code has comments that clearly indicate which parts correspond to which sections of your report. You explicitly acknowledge when you borrow pieces of code from sources other than the lecture and tutorial materials.
2022-04-08