{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Learning Practice 2 for the University of Tulsa's QM-7063 Data Mining Course\n",
    "# Dimension Reduction\n",
    "# Professor: Dr. Abdulrashid, Spring 2023\n",
    "# Noah L. Schrick - 1492657\n",
    "\n",
    "import heapq\n",
    "from collections import defaultdict\n",
    "\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "from mlxtend.frequent_patterns import apriori\n",
    "from mlxtend.frequent_patterns import association_rules\n",
    "\n",
    "from surprise import Dataset, Reader, KNNBasic\n",
    "from surprise.model_selection import train_test_split\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Problem 14.1\n",
    "An analyst at a subscription-based satellite radio company has been given a sample of data from their customer database, with the goal of finding groups of customers who are associated with one another. The data consist of company data, together with purchased demographic data that are mapped to the company data (see Table 14.13). The analyst decides to apply association rules to learn more about the associations between customers. Comment on this approach.\n",
    "\n",
    "This is a reasonable approach for exploring associative relationships between customers. Because the company data are combined with demographic data, the association rules can surface richer associations: purchase behavior can be examined with respect to age, location, number of dependents, and whatever other demographic attributes are available."
   ]
},
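  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustrative sketch (hypothetical data, not from the assignment): demographic attributes can be one-hot encoded alongside purchase indicators so that the rule-mining step treats them all as items and can relate demographics to purchases."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical illustration only: mix purchase flags with one-hot demographic flags\n",
    "# so that association rules can relate demographics to purchases.\n",
    "demo_df = pd.DataFrame({\n",
    "    'PremiumPlan': [1, 0, 1, 1, 0],\n",
    "    'SportsAddOn': [1, 0, 1, 0, 0],\n",
    "    'AgeUnder35':  [1, 1, 1, 0, 0],\n",
    "    'HasKids':     [0, 1, 0, 1, 1],\n",
    "})\n",
    "freq = apriori(demo_df.astype(bool), min_support=0.2, use_colnames=True)\n",
    "association_rules(freq)[['antecedents', 'consequents', 'support', 'confidence', 'lift']]"
   ]
  },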
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Problem 14.3\n",
    "We again consider the data in CourseTopics.csv describing course purchases at Statistics.com (see Problem 14.2 and data sample in Table 14.14). We want to provide a course recommendation to a student who purchased the Regression and Forecast courses. Apply user-based collaborative filtering to the data. You will get a Null matrix. Explain why this happens."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Computing the cosine similarity matrix...\n"
     ]
    },
    {
     "ename": "ZeroDivisionError",
     "evalue": "float division",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mZeroDivisionError\u001b[0m Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[28], line 16\u001b[0m\n\u001b[1;32m 14\u001b[0m sim_options \u001b[39m=\u001b[39m {\u001b[39m'\u001b[39m\u001b[39mname\u001b[39m\u001b[39m'\u001b[39m: \u001b[39m'\u001b[39m\u001b[39mcosine\u001b[39m\u001b[39m'\u001b[39m, \u001b[39m'\u001b[39m\u001b[39muser_based\u001b[39m\u001b[39m'\u001b[39m: \u001b[39mTrue\u001b[39;00m} \u001b[39m# compute cosine similarities between users\u001b[39;00m\n\u001b[1;32m 15\u001b[0m algo \u001b[39m=\u001b[39m KNNBasic(sim_options\u001b[39m=\u001b[39msim_options)\n\u001b[0;32m---> 16\u001b[0m algo\u001b[39m.\u001b[39;49mfit(trainset)\n\u001b[1;32m 17\u001b[0m \u001b[39m#pred = algo.predict(str(823519), str(30), r_ui=4, verbose=True)\u001b[39;00m\n",
      "File \u001b[0;32m~/.local/lib/python3.10/site-packages/surprise/prediction_algorithms/knns.py:98\u001b[0m, in \u001b[0;36mKNNBasic.fit\u001b[0;34m(self, trainset)\u001b[0m\n\u001b[1;32m 95\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mfit\u001b[39m(\u001b[39mself\u001b[39m, trainset):\n\u001b[1;32m 97\u001b[0m SymmetricAlgo\u001b[39m.\u001b[39mfit(\u001b[39mself\u001b[39m, trainset)\n\u001b[0;32m---> 98\u001b[0m \u001b[39mself\u001b[39m\u001b[39m.\u001b[39msim \u001b[39m=\u001b[39m \u001b[39mself\u001b[39;49m\u001b[39m.\u001b[39;49mcompute_similarities()\n\u001b[1;32m 100\u001b[0m \u001b[39mreturn\u001b[39;00m \u001b[39mself\u001b[39m\n",
      "File \u001b[0;32m~/.local/lib/python3.10/site-packages/surprise/prediction_algorithms/algo_base.py:248\u001b[0m, in \u001b[0;36mAlgoBase.compute_similarities\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 246\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mgetattr\u001b[39m(\u001b[39mself\u001b[39m, \u001b[39m\"\u001b[39m\u001b[39mverbose\u001b[39m\u001b[39m\"\u001b[39m, \u001b[39mFalse\u001b[39;00m):\n\u001b[1;32m 247\u001b[0m \u001b[39mprint\u001b[39m(\u001b[39mf\u001b[39m\u001b[39m\"\u001b[39m\u001b[39mComputing the \u001b[39m\u001b[39m{\u001b[39;00mname\u001b[39m}\u001b[39;00m\u001b[39m similarity matrix...\u001b[39m\u001b[39m\"\u001b[39m)\n\u001b[0;32m--> 248\u001b[0m sim \u001b[39m=\u001b[39m construction_func[name](\u001b[39m*\u001b[39;49margs)\n\u001b[1;32m 249\u001b[0m \u001b[39mif\u001b[39;00m \u001b[39mgetattr\u001b[39m(\u001b[39mself\u001b[39m, \u001b[39m\"\u001b[39m\u001b[39mverbose\u001b[39m\u001b[39m\"\u001b[39m, \u001b[39mFalse\u001b[39;00m):\n\u001b[1;32m 250\u001b[0m \u001b[39mprint\u001b[39m(\u001b[39m\"\u001b[39m\u001b[39mDone computing similarity matrix.\u001b[39m\u001b[39m\"\u001b[39m)\n",
      "File \u001b[0;32m~/.local/lib/python3.10/site-packages/surprise/similarities.pyx:83\u001b[0m, in \u001b[0;36msurprise.similarities.cosine\u001b[0;34m()\u001b[0m\n",
      "\u001b[0;31mZeroDivisionError\u001b[0m: float division"
     ]
    }
   ],
   "source": [
    "# Read in Course Topics data\n",
    "courses_df = pd.read_csv('Coursetopics.csv')\n",
    "\n",
    "# Convert to long format usable for surprise similarities\n",
    "courses_df['Index'] = range(1, len(courses_df) + 1)\n",
    "course_melt = courses_df.melt(id_vars=['Index'], value_vars=['Intro', 'DataMining', 'Survey', 'Cat Data', 'Regression', 'Forecast', 'DOE', 'SW'],\n",
    "                              var_name='Course', value_name='Taken')\n",
    "\n",
    "reader = Reader(rating_scale=(0, 1))\n",
    "data = Dataset.load_from_df(course_melt[['Index', 'Course', 'Taken']], reader)\n",
    "trainset = data.build_full_trainset()\n",
    "\n",
    "# NOTE: The following will error. This is expected and part of the question. Explanation in the corresponding answer.\n",
    "sim_options = {'name': 'cosine', 'user_based': True}  # compute cosine similarities between users\n",
    "algo = KNNBasic(sim_options=sim_options)\n",
    "algo.fit(trainset)\n",
    "#pred = algo.predict(str(823519), str(30), r_ui=4, verbose=True)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The provided dataset is Boolean: a 1 means a student has taken a course and a 0 means they have not. Because each student has taken only a few of the listed courses, the matrix is sparse and dominated by zeros. Since the 0s were loaded into surprise as actual ratings, the cosine similarity between two users is computed over all courses they have both rated, including the \"not taken\" entries. For user pairs whose shared ratings are all 0, the rating vectors have zero norm, so the denominator of the cosine formula is zero and a float division error is raised. This can be remedied by treating the 0s as missing values rather than as ratings, i.e., filtering out the \"not taken\" rows before loading the data, so that surprise only sees the courses a student actually took."
   ]
},
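  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of that remedy (assuming `course_melt` and `reader` from the cell above): drop the 0 rows before loading, so surprise treats untaken courses as missing ratings rather than as ratings of 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of the remedy: keep only the 'taken' rows so that 0s become\n",
    "# missing ratings instead of zero-valued ratings.\n",
    "taken_only = course_melt[course_melt['Taken'] == 1]\n",
    "data_fix = Dataset.load_from_df(taken_only[['Index', 'Course', 'Taken']], reader)\n",
    "trainset_fix = data_fix.build_full_trainset()\n",
    "algo_fix = KNNBasic(sim_options={'name': 'cosine', 'user_based': True})\n",
    "algo_fix.fit(trainset_fix)  # no zero-norm user vectors remain"
   ]
  },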
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Problem 14.4\n",
    "The data shown in Table 14.15 and the output in Table 14.16 are based on a subset of a dataset on cosmetic purchases (Cosmetics.csv) at a large chain drugstore. The store wants to analyze associations among purchases of these items for purposes of point-of-sale display, guidance to sales personnel in promoting cross-sales, and guidance for piloting an eventual time-of-purchase electronic recommender system to boost cross-sales. Consider first only the data shown in Table 14.15, given in binary matrix form.\n",
    " a. Select several values in the matrix and explain their meaning.\n",
    " b. Consider the results of the association rules analysis shown in Table 14.16.\n",
    "   i. For the first row, explain the “confidence” output and how it is calculated.\n",
    "   ii. For the first row, explain the “support” output and how it is calculated.\n",
    "   iii. For the first row, explain the “lift” and how it is calculated.\n",
    "   iv. For the first row, explain the rule that is represented there in words.\n",
    " c. Now, use the complete dataset on the cosmetics purchases (in the file Cosmetics.csv). Using Python, apply association rules to these data (for apriori use min_support=0.1 and use_colnames=True, for association_rules use default parameters).\n",
    "   i. Interpret the first three rules in the output in words.\n",
    "   ii. Reviewing the first couple of dozen rules, comment on their redundancy and how you would assess their utility."
   ]
}
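,
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For part (c), a minimal sketch of the requested call (assuming `Cosmetics.csv` loads as a 0/1 binary purchase matrix; the transaction-identifier column name `Trans. #` is an assumption about the file layout):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch for part (c); assumes Cosmetics.csv is a binary (0/1) purchase matrix.\n",
    "cosmetics_df = pd.read_csv('Cosmetics.csv')\n",
    "# Drop a transaction identifier column if present (column name is an assumption).\n",
    "cosmetics_df = cosmetics_df.drop(columns=['Trans. #'], errors='ignore')\n",
    "itemsets = apriori(cosmetics_df.astype(bool), min_support=0.1, use_colnames=True)\n",
    "rules = association_rules(itemsets)  # default parameters, per the problem statement\n",
    "rules.head(3)"
   ]
  }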
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  },
  "vscode": {
   "interpreter": {
    "hash": "767d51c1340bd893661ea55ea3124f6de3c7a262a8b4abca0554b478b1e2ff90"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}