{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "26f73182", "metadata": {}, "source": [ "# Introduction to Scikit-Learn\n", "\n", "Scikit-Learn (sklearn) is a powerful Python package for machine learning. The goals of this tutorial are:\n", "\n", "1. To learn how to use Scikit-Learn to implement machine learning models.\n", "2. To understand the general structure of using the Scikit-Learn API.\n", "\n", "The main framework for implementing machine learning models in sklearn are:\n", "\n", "1. Import the sklearn objects you need for the code\n", "2. Prepare a set of preprocessed (namely cleaned and scaled) data to give your model.\n", "3. Create the model object in your code.\n", "4. Use the model object to train your model using the appropriate training method (usually `fit()`)\n", "5. Apply model to data that the model has not seen (test data) using the appropriate prediction/transformation method (usually `predict()`)\n", "\n", "Understanding this structure and the methods within the sklearn objects that accomplish this are all you need in order to work with sklearn.\n", "\n", "In this tutorial we will cover features of sklearn that allow you to:\n", "\n", "- Load and preprocess data\n", "- Implement supervised learning models\n", "- Implement unsupervised learning models\n", "\n", "This notebook introduces these concepts with example code cells. Attendees are expected to follow along and execute the code cells themselves. I will explain what each of the commands do in the code blocks. \n", "\n", "I have included a few practice examples at the end of the tutorial." ] }, { "cell_type": "code", "execution_count": null, "id": "3944fa42", "metadata": {}, "outputs": [], "source": [ "# Import sklearn and print the version\n", "import sklearn\n", "print(sklearn.__version__)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "6a9d3fc2", "metadata": {}, "source": [ "## Data preprocessing\n", "\n", "Data preprocessing is an essential step before applying any machine learning algorithm. In general, you are not handed a ready-to-use dataset. Datasets often contain incorrect data, missing data, and data with different scales adn types. \n", "\n", "Before you can extract useful information from the data through a machine learnign algorithm, you will need to preprocess the data. In this section we will demonstrate the following topics:\n", "\n", "- Loading datasets\n", "\n", " - Toy datasets\n", " - External datasets\n", " - Generated datasets\n", " - Real world dataset\n", " \n", "- Exploratory data analysis\n", "\n", " - Pandas tools\n", " - Basic visualization\n", " \n", "- Test train splits\n", "\n", "- Scaling datasets\n", "\n", " - Scaler object\n", " - Min-max scaling\n", " - Standardization" ] }, { "attachments": {}, "cell_type": "markdown", "id": "e3f9f7a8", "metadata": {}, "source": [ "### Toy datasets\n", "\n", "Scikit learn provides some built in toy datasets. There is an easy API call to load these datasets. Scikit-learn's toy datasets make it easy to test out many kinds of machine learning algorithms. The list of datasets is at this [link](https://scikit-learn.org/stable/datasets/toy_dataset.html#toy-datasets). The following code cell shows how to import a built-in toy dataset." 
] }, { "cell_type": "code", "execution_count": null, "id": "71730686", "metadata": { "scrolled": true }, "outputs": [], "source": [ "from sklearn import datasets\n", "\n", "X = datasets.load_iris()\n", "\n", "print(X)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "e61dea21", "metadata": {}, "source": [ "What kind of object is X? You can find this out by using the `type()` method." ] }, { "cell_type": "code", "execution_count": null, "id": "b7d98704", "metadata": {}, "outputs": [], "source": [ "print(type(X))" ] }, { "attachments": {}, "cell_type": "markdown", "id": "cbb7b02b", "metadata": {}, "source": [ "This means we are working with an object of type `Bunch`. The `Bunch` object X has the following attributes:\n", "\n", "- `data`: the data matrix\n", "- `target`: the classification target\n", "- `feature_names`: the names of the dataset columns\n", "- `target_names`: the names o the target classes\n", "\n", "To access the 2 numpy 2 arrays that contain the data matrix and the target values you use these commands\n", "\n", "1. `X_data = X[\"data\"]` or `X.data`\n", "2. `X_target = X[\"target\"]` or `X.target`\n", "\n", "The same syntax works for `feature_names` or `target_names`.\n", "\n", "You can read more about this data type at this [link](https://scikit-learn.org/stable/modules/generated/sklearn.utils.Bunch.html). Let's see this in action" ] }, { "cell_type": "code", "execution_count": null, "id": "6fb781b5", "metadata": { "scrolled": true }, "outputs": [], "source": [ "print(X)" ] }, { "cell_type": "code", "execution_count": null, "id": "478de57b", "metadata": {}, "outputs": [], "source": [ "X_data = X[\"data\"]\n", "print(X_data)" ] }, { "cell_type": "code", "execution_count": null, "id": "3bc95ad4", "metadata": { "scrolled": true }, "outputs": [], "source": [ "X_target = X[\"target\"]\n", "print(X_target)" ] }, { "cell_type": "code", "execution_count": null, "id": "e76096d9", "metadata": {}, "outputs": [], "source": [ "print(type(X_data))\n", "print(type(X_target))" ] }, { "attachments": {}, "cell_type": "markdown", "id": "c09d7c83", "metadata": {}, "source": [ "It is also possible to load the data as a Pandas dataframe. Pandas is a Python package for storing and manipulating data. In Pandas, data is stored in a Dataframe object. The dataframe object stores data in a table (rows and columns). Additionally, the dataframe object has methods to manipulate and analyze the data it contains. " ] }, { "cell_type": "code", "execution_count": null, "id": "88669438", "metadata": { "scrolled": true }, "outputs": [], "source": [ "Z = datasets.load_iris(as_frame=True)\n", "print(Z)" ] }, { "cell_type": "code", "execution_count": null, "id": "40774d7f", "metadata": {}, "outputs": [], "source": [ "Z_data = Z[\"data\"]\n", "print(type(Z_data))\n", "print(Z_data.head())" ] }, { "cell_type": "code", "execution_count": null, "id": "0023c7f8", "metadata": {}, "outputs": [], "source": [ "Z_target = Z[\"target\"]\n", "print(type(Z_target))\n", "print(Z_target.head())" ] }, { "cell_type": "code", "execution_count": null, "id": "faacff2d", "metadata": {}, "outputs": [], "source": [ "Z_names = Z[\"target_names\"]\n", "print(Z_names)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "b2cb374d", "metadata": {}, "source": [ "## Loading other datasets\n", "\n", "In general you do not develop machine learning applications with a toy dataset. Instead your dataset comes from a database or a file (e.g., csv, Excel). Scikit-learn offers limited tools to import files. 
This means you have to use other tools, like Pandas, to import your file into dataframes or arrays. To load a `.csv` file you can use the Pandas method `read_csv()`; see this [link](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html) for the full documentation." ] }, { "cell_type": "code", "execution_count": null, "id": "140c2a7a", "metadata": { "scrolled": true }, "outputs": [], "source": [ "import pandas as pd\n", "print(pd.__version__)" ] }, { "cell_type": "code", "execution_count": null, "id": "6bf71989", "metadata": { "scrolled": false }, "outputs": [], "source": [ "df = pd.read_csv(\"datasets/Salaries.csv\")\n", "print(df.head())" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4190df46", "metadata": {}, "source": [ "## Generating a dataset\n", "\n", "Scikit-learn has built-in functions that allow you to create a random dataset. These randomly generated datasets can then be used to explore various machine learning algorithms." ] }, { "cell_type": "code", "execution_count": null, "id": "f2858bc7", "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import make_blobs\n", "X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=0)" ] }, { "cell_type": "code", "execution_count": null, "id": "a27dd649", "metadata": {}, "outputs": [], "source": [ "print(X.shape)" ] }, { "cell_type": "code", "execution_count": null, "id": "54cdc0b5", "metadata": { "scrolled": true }, "outputs": [], "source": [ "print(y)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "b5b9f603", "metadata": {}, "source": [ "We can plot the data that we just created using [matplotlib](https://matplotlib.org/)." ] }, { "cell_type": "code", "execution_count": null, "id": "7697c3e1", "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "colors = {0:'red', 1:'blue'}\n", "c_arr = [colors[k] for k in y]\n", "plt.scatter(X[:, 0], X[:, 1], marker=\"o\", c=c_arr, s=50, edgecolor=\"k\");" ] }, { "attachments": {}, "cell_type": "markdown", "id": "3da0c926", "metadata": {}, "source": [ "## Real world datasets\n", "\n", "Using scikit-learn you can also import real world datasets. The real world datasets are larger than the toy datasets." ] }, { "cell_type": "code", "execution_count": null, "id": "5610cc26", "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import fetch_california_housing\n", "RW = fetch_california_housing(as_frame=True)" ] }, { "cell_type": "code", "execution_count": null, "id": "518ce405", "metadata": {}, "outputs": [], "source": [ "RW_data = RW[\"data\"]\n", "RW_target = RW[\"target\"]" ] }, { "cell_type": "code", "execution_count": null, "id": "f91787ff", "metadata": { "scrolled": true }, "outputs": [], "source": [ "RW_data.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "994fe56c", "metadata": {}, "outputs": [], "source": [ "RW_target.head()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "68fe50c1", "metadata": {}, "source": [ "## Exploratory data analysis\n", "\n", "It is important to explore and understand your dataset prior to applying machine learning algorithms to it. There are a few functions in Pandas that are helpful for this.\n", "\n", "We will explore these functions using the iris dataset first. We will perform a few transformations on this dataset prior to the analysis."
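, "\n", "\n", "As a rough sketch of the kinds of quick checks Pandas makes easy (shown here on a tiny made-up dataframe; the next cells run similar checks on the iris dataframe):\n", "\n", "```python\n", "import pandas as pd\n", "\n", "# Minimal sketch on a small, made-up dataframe\n", "df_demo = pd.DataFrame({\"a\": [1.0, 2.0, None], \"b\": [\"x\", \"y\", \"z\"]})\n", "print(df_demo.shape)           # number of rows and columns\n", "print(df_demo.isnull().sum())  # missing values per column\n", "print(df_demo.dtypes)          # data type of each column\n", "```"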
] }, { "cell_type": "code", "execution_count": null, "id": "109fd0f1", "metadata": { "scrolled": true }, "outputs": [], "source": [ "## Data manipulation\n", "Z_df = Z.frame\n", "\n", "Z_df[\"target_names\"] =Z_df[\"target\"].replace(to_replace=\n", " {0: Z.target_names[0], \n", " 1: Z.target_names[1], \n", " 2: Z.target_names[2]})\n", "print(Z_df.head())" ] }, { "cell_type": "code", "execution_count": null, "id": "c8e00900", "metadata": {}, "outputs": [], "source": [ "## Pandas info\n", "Z_df.info()" ] }, { "cell_type": "code", "execution_count": null, "id": "7f8571bd", "metadata": { "scrolled": true }, "outputs": [], "source": [ "## Pandas describe\n", "Z_df.describe()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0c1600de", "metadata": {}, "source": [ "It is also very helpful to visualize variables of interest. We will use a visualization package called [Seaborn](https://seaborn.pydata.org/). Seaborn is intended for visualizing statistical data. In particular, most functions require as input a dataframe." ] }, { "cell_type": "code", "execution_count": null, "id": "421e53a8", "metadata": {}, "outputs": [], "source": [ "## Seaborn to visualize data\n", "import seaborn as sns\n", "print(sns.__version__)" ] }, { "cell_type": "code", "execution_count": null, "id": "3a28d951", "metadata": {}, "outputs": [], "source": [ "## Boxplot\n", "sns.boxplot(data=Z_df, x=\"target_names\", y=\"sepal length (cm)\");" ] }, { "cell_type": "code", "execution_count": null, "id": "2588daa0", "metadata": {}, "outputs": [], "source": [ "## Pairplot\n", "sns.pairplot(Z_df.drop(\"target\", axis=1), hue=\"target_names\");" ] }, { "attachments": {}, "cell_type": "markdown", "id": "27a62862", "metadata": {}, "source": [ "## Exercises\n", "Let's do some exploratory data analysis on the RW_data set." ] }, { "cell_type": "code", "execution_count": null, "id": "018c9403", "metadata": { "scrolled": false }, "outputs": [], "source": [ "### Determine if there are any null values in the RW_data dataframe ###\n", "RW_data.info()" ] }, { "cell_type": "code", "execution_count": null, "id": "05b91219", "metadata": {}, "outputs": [], "source": [ "### Compute the descriptive statistics of all the input features ###\n", "RW_data.describe()" ] }, { "cell_type": "code", "execution_count": null, "id": "248a02cc", "metadata": {}, "outputs": [], "source": [ "### Create a horizontal box plot RW_DATA\n", "### What is an important observation from this plot?\n", "### How is it a useful visualzation? How is it not a useful visualization?\n", "sns.boxplot(data=RW_data, orient=\"h\");" ] }, { "attachments": {}, "cell_type": "markdown", "id": "6d3c48d4", "metadata": {}, "source": [ "## Dataset summary\n", "\n", "We have 4 datasets stored in our notebook which we summarize below\n", "\n", "- Iris dataset (toy dataset)\n", "- Generated dataset \n", "- California housing dataset (real world dataset)\n", "- Salaries dataset (toy dataset)\n", "\n", "\n", "We will used these datasets in subsequent cells of the notebook when we explore more techniques in Scikit-learn." ] }, { "attachments": {}, "cell_type": "markdown", "id": "ef0781d8", "metadata": {}, "source": [ "## Test train split\n", "\n", "In order to train and test your model you need to split your dataset into two sets:\n", "\n", "- training set\n", "- test set\n", "\n", "This is easy to accomplish with `sklearn.model_selection.train_test_split`.\n", "\n", "We will see how this function works using our generated dataset. 
This is a good choice because the dataset is small and it is easy to confirm the expected behavior." ] }, { "cell_type": "code", "execution_count": null, "id": "69b11f2b", "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)" ] }, { "cell_type": "code", "execution_count": null, "id": "7f6f4fc4", "metadata": {}, "outputs": [], "source": [ "print(\"Shape of X_train: \", X_train.shape, \", Shape of X_test: \", X_test.shape)\n", "print(\"Shape of y_train: \", y_train.shape, \", Shape of y_test: \", y_test.shape)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "a411da82", "metadata": {}, "source": [ "## Scaling data\n", "\n", "We have now observed and explored a few different datasets. One important observation we made was that our datasets can have very different scales.\n", "\n", "Why is it not a good idea to run a machine learning algorithm on a dataset where the input features have scales that differ by orders of magnitude?\n", "\n", "What can we do to fix this? As the title of this section suggests, we will scale our data. This is a way to ensure that the numerical features of our datasets have scales that are of the same order of magnitude.\n", "\n", "Here are a few common ways to scale data:\n", "\n", " - Standard scaling\n", " - Min-Max scaling\n", " \n", "These (and others) are all implemented in scikit-learn. The methodology for using a scaling technique in scikit-learn is the same in each case. You will always start by creating a scaler object and then use this object to scale your data." ] }, { "attachments": {}, "cell_type": "markdown", "id": "e9017186", "metadata": {}, "source": [ "### Standard scaling\n", "Standard scaling scales data so that all the numerical features have zero mean and unit variance.\n", "\n", "Since the scales of the RW dataset were very different, we will apply the scaler to this dataset and then replot the box and whisker plot." ] }, { "cell_type": "code", "execution_count": null, "id": "9d1f20d1", "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import StandardScaler\n", "scaler = StandardScaler()\n", "RW_standard = scaler.fit_transform(RW_data)\n", "RW_standard = pd.DataFrame(RW_standard, columns=RW_data.columns)\n", "RW_standard.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "e1b1232a", "metadata": { "scrolled": false }, "outputs": [], "source": [ "sns.boxplot(data=RW_standard, orient=\"h\");" ] }, { "attachments": {}, "cell_type": "markdown", "id": "98eec37e", "metadata": {}, "source": [ "### Min-max scaling\n", "\n", "We can also do min-max scaling, which scales each feature to a range between 0 and 1." ] }, { "cell_type": "code", "execution_count": null, "id": "8cfe9640", "metadata": { "scrolled": true }, "outputs": [], "source": [ "from sklearn.preprocessing import MinMaxScaler\n", "scaler = MinMaxScaler()\n", "RW_minmax = scaler.fit_transform(RW_data)\n", "RW_minmax = pd.DataFrame(RW_minmax, columns=RW_data.columns)\n", "RW_minmax.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "257a6696", "metadata": {}, "outputs": [], "source": [ "sns.boxplot(data=RW_minmax, orient=\"h\");" ] }, { "attachments": {}, "cell_type": "markdown", "id": "0532e6b3", "metadata": {}, "source": [ "## Supervised learning\n", "\n", "In this section we will cover supervised learning algorithms in Scikit-Learn. Supervised learning is a machine learning technique where we train a model using *labeled* data.
This trained model can then be used to predict values on new data.\n", "\n", "There are two broad categories of supervised learning:\n", "\n", "- Regression, when the model predicts continuous variables\n", "- Classification, when the model segments data into classes\n", "\n", "Scikit-learn has a standardized API which makes it easy to train different models with very similar pieces of code. Generally you will create an object for the model that you want, e.g., `LinearRegression` or `LogisticRegression`. These objects have all the methods you need to train your model and then make predictions with it.\n", "\n", "We will cover the following supervised learning methods:\n", "\n", "- Linear regression (regression)\n", "- Logistic regression (classification)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "775e1932", "metadata": {}, "source": [ "### Notation\n", "We introduce the notation and general ideas behind supervised learning:\n", "\n", "- A pair $(x^{(i)}, y^{(i)})$ is called a training example\n", "- A set $\\{(x^{(i)}, y^{(i)})\\}_{i=1}^{m}$ is called a training set\n", "- The goal is to find a function $h(x)$ that is good at predicting targets $y$\n", "- Assume $\\hat{y} = h_{w}(x)$ depends on a parameter $w$ (or parameters $w_{i}$ if $x$ is a vector)\n", "- Use the labeled training set to *learn* the parameter(s) $w$ for the function $h_{w}(x)$\n", "- The fully trained $h_{w}(x)$ is referred to as a *model*" ] }, { "attachments": {}, "cell_type": "markdown", "id": "6ed35ae1", "metadata": {}, "source": [ "## Linear regression\n", "\n", "To create a linear regression model in scikit-learn you will instantiate the [LinearRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) object.\n", "\n", "We use the scaled California housing dataset to demonstrate how to create this object, train the model, and then predict on the test set."
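, "\n", "\n", "In the notation introduced above, linear regression assumes the prediction is a linear function of the input features,\n", "\n", "$$\\hat{y} = h_{w}(x) = w_{0} + w_{1}x_{1} + \\dots + w_{n}x_{n},$$\n", "\n", "and fitting the model amounts to choosing the parameters $w_{i}$ that minimize the mean squared error on the training set, $\\frac{1}{m}\\sum_{i=1}^{m}\\left(h_{w}(x^{(i)}) - y^{(i)}\\right)^{2}$."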
] }, { "cell_type": "code", "execution_count": null, "id": "993f83a3", "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LinearRegression\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.datasets import fetch_california_housing\n", "from sklearn.preprocessing import StandardScaler\n", "from sklearn.metrics import mean_squared_error\n", "\n", "# Fetch data\n", "RW = fetch_california_housing()\n", "X = RW[\"data\"]\n", "y = RW[\"target\"]\n", "\n", "# Split\n", "X_train, X_test, y_train, y_test = train_test_split(X, y)\n", "\n", "# Standardize\n", "scaler = StandardScaler()\n", "X_train = scaler.fit_transform(X_train)\n", "X_test = scaler.fit_transform(X_test)\n", "\n", "# Create regression model\n", "reg = LinearRegression()\n", "# Train\n", "reg.fit(X_train, y_train)\n", "\n", "# Predict on test set\n", "y_pred = reg.predict(X_test)\n", "\n", "# R^2 value\n", "r2 = reg.score(X_test, y_test)\n", "print(\"The R^2 score is : \", r2)\n", "\n", "# Report Mean Square Error (mse)\n", "mse = mean_squared_error(y_test, y_pred)\n", "print(\"Mean squared error: \", mse)" ] }, { "cell_type": "code", "execution_count": null, "id": "7e195848", "metadata": {}, "outputs": [], "source": [ "## Logistic Regression\n", "\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.datasets import load_iris\n", "from sklearn.preprocessing import StandardScaler\n", "from sklearn.metrics import classification_report\n", "\n", "# Fetch data\n", "Iris = load_iris()\n", "X = Iris[\"data\"]\n", "y = Iris[\"target\"]\n", "\n", "# Split\n", "X_train, X_test, y_train, y_test = train_test_split(X, y)\n", "\n", "# Standardize\n", "scaler = StandardScaler()\n", "X_train = scaler.fit_transform(X_train)\n", "X_test = scaler.fit_transform(X_test)\n", "\n", "# Create regression model\n", "reg = LogisticRegression()\n", "# Train\n", "reg.fit(X_train, y_train)\n", "\n", "# Predict on test set\n", "y_pred = reg.predict(X_test)\n", "\n", "# Classification report\n", "print(classification_report(y_test, y_pred, target_names=Iris.target_names))" ] }, { "attachments": {}, "cell_type": "markdown", "id": "85906013", "metadata": {}, "source": [ "## Unsupervised learning\n", "\n", "In this section we will cover unsupervised learning algorithms in Scikit-Learn. Unsupervised learning is a machine learning technique where we train a model using *un-labeled* data. With unsupervised learning algorithms you are extracting information from the data itself without any labels.\n", "\n", "Somes examples of unsupervised learning techniques that we cover are:\n", "\n", "- Clustering\n", "- Principal Component Analysis (PCA)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "e9b6d5f1", "metadata": {}, "source": [ "## Clustering\n", "\n", "Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters).\n", "\n", "Here we will consider K-means clustering, where we will cluster objects into k-clusters. The clusters will be formed by determimning centroids of each cluster, then membership to the cluster is determined by an observations shortest distance to the centroid.\n", "\n", "For this problem we will work with a generated dataset." 
] }, { "cell_type": "code", "execution_count": null, "id": "ababcd6d", "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import make_blobs\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.cluster import KMeans\n", "\n", "# Generate data with 2 clusters\n", "X, y = make_blobs(n_samples=500, centers=2, n_features=2, random_state=10)\n", "\n", "# Create bcluster object\n", "cluster = KMeans(n_clusters=2, n_init=\"auto\");\n", "\n", "# Train cluster model\n", "cluster.fit(X);\n", "\n", "print(\"Cluster centers: \", cluster.cluster_centers_)" ] }, { "cell_type": "code", "execution_count": null, "id": "32024a97", "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "colors = {0:'red', 1:'blue'};\n", "c_arr=[colors[k] for k in y]\n", "plt.scatter(X[:, 0], X[:, 1], marker=\"o\", c=c_arr, s=25, edgecolor=\"k\");\n", "plt.scatter(cluster.cluster_centers_[:, 0], cluster.cluster_centers_[:, 1], marker='*', s=100, c='y');" ] }, { "attachments": {}, "cell_type": "markdown", "id": "3b4bfe38", "metadata": {}, "source": [ "## Principal component analysis (PCA)\n", "\n", "PCA is an unsupervised machine learning algorithm that helps to reduse the dimension of your data. The dimension of your data is the number of input features. This algorithm finds a reduced set of input features in the data that account for the majority of the variance in the data. This means that you can work with a smaller set of input features (smaller data), but you are not losing the important information from the full set of input features." ] }, { "cell_type": "code", "execution_count": null, "id": "8fc97084", "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_iris\n", "from sklearn.decomposition import PCA\n", "from sklearn.discriminant_analysis import LinearDiscriminantAnalysis\n", "\n", "iris = load_iris()\n", "print(\"Feature names: \", iris.feature_names)" ] }, { "cell_type": "code", "execution_count": null, "id": "0ad9f98d", "metadata": {}, "outputs": [], "source": [ "X = iris.data\n", "y = iris.target\n", "target_names = iris.target_names\n", "\n", "pca = PCA(n_components=2)\n", "#\n", "X_r = pca.fit_transform(X)" ] }, { "cell_type": "code", "execution_count": null, "id": "e2ed33ac", "metadata": {}, "outputs": [], "source": [ "plt.figure()\n", "colors = [\"navy\", \"turquoise\", \"darkorange\"]\n", "lw = 2\n", "\n", "for color, i, target_name in zip(colors, [0, 1, 2], target_names):\n", " plt.scatter(\n", " X_r[y == i, 0], X_r[y == i, 1], color=color, alpha=0.8, lw=lw, label=target_name\n", " )\n", "plt.legend(loc=\"best\", shadow=False, scatterpoints=1)\n", "plt.xlabel(\"PC 1\")\n", "plt.ylabel(\"PC 2\")\n", "plt.title(\"PCA of IRIS dataset\");" ] }, { "attachments": {}, "cell_type": "markdown", "id": "49000878", "metadata": {}, "source": [ "## Summary\n", "\n", "We have covered many topics in this tutorial. We have seen how to preprocess data and train supervised and unsupervised machine learning. We can also compute simple metrics tos evaluate the performance of these models. Hopefully this has given you a better idea of how to use sklearn.\n", "\n", "\n", "Remember that the main framework for working with sklearn has the following structure:\n", "\n", "1. Import the sklearn objects you need for the code.\n", "2. Prepare a set of preprocessed (namely cleaned and scaled) data to give your model (usually in the form of numpy arrays).\n", "3. Create the model object in your code.\n", "4. 
Use the model object to train your model using the appropriate training method (usually `fit()`).\n", "5. Apply the model to data that the model has not seen (test data) using the appropriate prediction/transformation method (usually `predict()`).\n", "\n", "This structure, and knowing the methods within the various data and model classes that accomplish it, are all you need in order to work with sklearn." ] }, { "attachments": {}, "cell_type": "markdown", "id": "d347f330", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "code", "execution_count": null, "id": "cbda895b", "metadata": {}, "outputs": [], "source": [ "## Exercise 1 ##\n", "# Using the California housing dataset:\n", "# Train a regression model only on these features\n", "# Evaluate the performance of this model using MSE\n", "# Does this reduced set of features give better performance than the full set of input features?" ] }, { "cell_type": "code", "execution_count": null, "id": "191e3f1c", "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "id": "dec21cc1", "metadata": {}, "outputs": [], "source": [ "## Exercise 2 ##\n", "# Load the breast cancer dataset\n", "# Create and train a random forest model on this dataset, call this object model1\n", "# Create and train a logistic regression model on this dataset, call this object model2\n", "# Evaluate the performance of both models\n", "# Which model is more accurate?\n", "# Which model is a better choice for this application and why?" ] }, { "cell_type": "code", "execution_count": null, "id": "f915d1cc", "metadata": {}, "outputs": [], "source": [ "## Exercise 3 ##\n", "# Generate a dataset with 4 blobs using these parameters\n", "# Perform K-means clustering to cluster the 4 blobs using these parameters\n", "# Evaluate the model using this function\n", "# Try to get more than 90% accuracy on the model" ] }, { "cell_type": "code", "execution_count": null, "id": "5ac738cc", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 5 }