{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Processing data with pandas II\n", "\n", "```{attention}\n", "Finnish university students are encouraged to use the CSC Notebooks platform.
\n", "\"CSC\n", "\n", "Others can follow the lesson and fill in their student notebooks using Binder.
\n", "\"Binder\n", "```\n", "\n", "This week we will continue developing our skills using [pandas](https://pandas.pydata.org/) to process real data. \n", "\n", "## Motivation\n", "\n", "![Finland April 2019](img/Finland-April-2019.png)\n", "*Source: [https://weather.com/news/climate/news/2019-05-20-april-2019-global-temperatures-nasa-noaa](https://weather.com/news/climate/news/2019-05-20-april-2019-global-temperatures-nasa-noaa)*\n", "\n", "April 2019 was the [second warmest April on record globally](https://weather.com/news/climate/news/2019-05-20-april-2019-global-temperatures-nasa-noaa), and the warmest on record at 13 weather stations in Finland. \n", "In this lesson, we will use our data manipulation and analysis skills to analyze weather data, and investigate the claim that April 2019 was the warmest on record across Finland.\n", "\n", "Along the way we will cover a number of useful techniques in pandas including:\n", "\n", "- renaming columns\n", "- iterating data frame rows and applying functions\n", "- data aggregation\n", "- repeating the analysis task for several input files" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Input data\n", "\n", "In the lesson this week we are using weather observation data from Finland [downloaded from NOAA](https://www7.ncdc.noaa.gov/CDO/cdopoemain.cmd?datasetabbv=DS3505&countryabbv=&georegionabbv=&resolution=40). You will be working with data from either 15 or 4 different weather observation stations from Finland, depending on your environment.\n", "\n", "## Downloading the data\n", "\n", "The first step for today's lesson is to get the data. Which data files you download will depend on the platform you're using for working through the lesson. We recommend using the command line tool [wget](https://www.gnu.org/software/wget/) for downloading the data. wget is already installed in the cloud computing environments." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### CSC Notebooks users\n", "\n", "```{attention}\n", "We suggest using the Geo-Python Lite blueprint for this lesson if you would like to use data from all 15 weather observation stations.\n", "```\n", "\n", "First, you need to open a new terminal window in Jupyter Lab (from **File** -> **New** -> **Terminal**). Once the terminal window is open, you will need to navigate to the L6 directory:\n", "\n", "```bash\n", "cd notebooks/L6/\n", "```\n", "\n", "You can confirm that you are located in the correct directory by listing the contents of the current directory:\n", "\n", "```bash\n", "ls\n", "```\n", "\n", "You should see something like the following output:\n", "\n", "```bash\n", "advanced-data-processing-with-pandas.ipynb errors.ipynb img\n", "debugging.ipynb gcp-5-assertions.ipynb\n", "```\n", "\n", "If so, you're in the correct directory.\n", "\n", "#### Downloading the data (Geo-Python Lite blueprint)\n", "\n", "If you are using the Geo-Python Lite blueprint you can download the full dataset using wget:\n", "\n", "```bash\n", "wget https://davewhipp.github.io/data/Finland-weather-data-full.tar.gz\n", "```\n", "\n", "After the download completes, you can extract the data files usign tar:\n", "\n", "```bash\n", "tar zxvf Finland-weather-data-full.tar.gz\n", "```\n", "\n", "At this stage you should have a new directory called `data` that contains the data for this week's lesson. 
You can confirm this by listing the contents of the data folder:\n", "\n", "```bash\n", "ls data\n", "```\n", "\n", "You should see something like the following:\n", "\n", "```bash\n", "028360.txt 029070.txt 029440.txt 029740.txt 6367598020644inv.txt\n", "028690.txt 029110.txt 029500.txt 029810.txt 6367598020644stn.txt\n", "028750.txt 029170.txt 029700.txt 029820.txt\n", "028970.txt 029350.txt 029720.txt 3505doc.txt\n", "```\n", "\n", "Now you should be all set to proceed with the lesson!\n", "\n", "#### Downloading the data (regular Geo-Python blueprint)\n", "\n", "If you are using the regular Geo-Python blueprint you can download a partial dataset using wget:\n", "\n", "```bash\n", "wget https://davewhipp.github.io/data/Finland-weather-data-CSC.tar.gz\n", "```\n", "\n", "After the download completes, you can extract the data files using tar:\n", "\n", "```bash\n", "tar zxvf Finland-weather-data-CSC.tar.gz\n", "```\n", "\n", "At this stage you should have a new directory called `data` that contains the input data for this week's lesson. You can confirm this by listing the contents of the data folder:\n", "\n", "```bash\n", "ls data\n", "```\n", "\n", "You should see something like the following:\n", "\n", "```bash\n", "029440.txt 029720.txt 3505doc.txt 6367598020644stn.txt\n", "029700.txt 029740.txt 6367598020644inv.txt\n", "```\n", "\n", "Now you should be all set to proceed with the lesson!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Students using Jupyter on their personal computers\n", "\n", "If you are working on your own computer, you need to pay attention to the filepaths. First, you need to open a new terminal window in Jupyter Lab (from **File** -> **New** -> **Terminal**). Once the terminal window is open, you will need to navigate to the L6 directory:\n", "\n", "```bash\n", "cd path/to/L6/\n", "```\n", "\n", "where `path/to/` should be replaced with the correct path for the Lesson 6 materials on your computer. Once in the correct directory, you can confirm this by typing:\n", "\n", "```bash\n", "ls\n", "```\n", "\n", "You should see something like the following output:\n", "\n", "```bash\n", "advanced-data-processing-with-pandas.ipynb errors.ipynb img\n", "debugging.ipynb gcp-5-assertions.ipynb\n", "```\n", "\n", "Next, you can download the data files using wget:\n", "\n", "```bash\n", "wget https://davewhipp.github.io/data/Finland-weather-data-full.tar.gz\n", "```\n", "\n", "After the download completes, you can extract the data files using tar:\n", "\n", "```bash\n", "tar zxvf Finland-weather-data-full.tar.gz\n", "```\n", "\n", "At this stage you should have a new directory called `data` that contains the data for this week's lesson. You can confirm this by listing the contents of the data folder:\n", "\n", "```bash\n", "ls data\n", "```\n", "\n", "You should see something like the following:\n", "\n", "```bash\n", "028360.txt 029070.txt 029440.txt 029740.txt 6367598020644inv.txt\n", "028690.txt 029110.txt 029500.txt 029810.txt 6367598020644stn.txt\n", "028750.txt 029170.txt 029700.txt 029820.txt\n", "028970.txt 029350.txt 029720.txt 3505doc.txt\n", "```\n", "\n", "Now you should be all set to proceed with the lesson!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Binder users\n", "\n", "It is not recommended to use Binder for this lesson.\n", "\n", "## About the data\n", "\n", "As part of the download there are a number of files that describe the weather data. 
These *metadata* files include:\n", "\n", "- A list of stations\\*: [data/6367598020644stn.txt](metadata/6367598020644stn.txt)\n", "- Details about weather observations at each station: [data/6367598020644inv.txt](metadata/6367598020644inv.txt)\n", "- A data description (i.e., column names): [data/3505doc.txt](metadata/3505doc.txt)\n", "\n", "\\*Note that the list of stations is for all 15 stations, even if you're working with only the partial dataset of 4 stations.\n", "\n", "The input data for this week are separated with a varying number of spaces (i.e., fixed width). The first lines and columns of the data look like the following:\n", "\n", "``` \n", " USAF WBAN YR--MODAHRMN DIR SPD GUS CLG SKC L M H VSB MW MW MW MW AW AW AW AW W TEMP DEWP SLP ALT STP MAX MIN PCP01 PCP06 PCP24 PCPXX SD\n", "029440 99999 190601010600 090 7 *** *** OVC * * * 0.0 ** ** ** ** ** ** ** ** * 27 **** 1011.0 ***** ****** *** *** ***** ***** ***** ***** ** \n", "029440 99999 190601011300 *** 0 *** *** OVC * * * 0.0 ** ** ** ** ** ** ** ** * 27 **** 1015.5 ***** ****** *** *** ***** ***** ***** ***** ** \n", "029440 99999 190601012000 *** 0 *** *** OVC * * * 0.0 ** ** ** ** ** ** ** ** * 25 **** 1016.2 ***** ****** *** *** ***** ***** ***** ***** ** \n", "029440 99999 190601020600 *** 0 *** *** CLR * * * 0.0 ** ** ** ** ** ** ** ** * 26 **** 1016.2 ***** ****** *** *** ***** ***** ***** ***** **\n", "```\n", "\n", "We will develop our analysis workflow using data for a single station. Then, we will repeat the same process for all the stations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reading the data\n", "\n", "In order to get started, let's first import pandas: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At this point, we can already have a quick look at the data file `029440.txt` for Tampere Pirkkala and how it is structured. We can notice at least two things we need to consider when reading in the data:\n", "\n", "```{admonition} Input data structure\n", "- **Delimiter:** The data are **separated with a varying amount of spaces**. If you check out the documentation for the [read_csv() method](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html), you can see that there are two different ways of doing this. We can use either `sep='\\s+'` or `delim_whitespace=True` (but not both at the same time). In this case, we prefer to use the `delim_whitespace` parameter.\n", "\n", "- **No Data values:** No data values in the NOAA data are coded with a varying number of `*`. 
We can tell pandas to consider those characters as NaNs by specifying `na_values=['*', '**', '***', '****', '*****', '******']`.\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define relative path to the file\n", "fp = r\"data/029440.txt\"\n", "\n", "# Read data using a varying amount of spaces as separator and specifying * characters as NoData values\n", "data = pd.read_csv(\n", " fp, delim_whitespace=True, na_values=[\"*\", \"**\", \"***\", \"****\", \"*****\", \"******\"]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see what the data looks like by printing the first five rows with the `head()` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All seems ok. However, we won't be needing all of the 33 columns for detecting warm temperatures in April. We can check all column names by running `data.columns`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "data.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A description for all these columns is available in the metadata file [data/3505doc.txt](metadata/3505doc.txt). \n", "\n", "### Reading in the data once again\n", "\n", "This time, we will read in only some of the columns using the `usecols` parameter. Let's read in columns that might be somehow useful to our analysis, or at least that contain some values that are meaningful to us, including the station name, timestamp, and data about wind and temperature: `'USAF','YR--MODAHRMN', 'DIR', 'SPD', 'GUS','TEMP', 'MAX', 'MIN'`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Read in only selected columns\n", "data = pd.read_csv(\n", " fp,\n", " delim_whitespace=True,\n", " usecols=[\"USAF\", \"YR--MODAHRMN\", \"DIR\", \"SPD\", \"GUS\", \"TEMP\", \"MAX\", \"MIN\"],\n", " na_values=[\"*\", \"**\", \"***\", \"****\", \"*****\", \"******\"],\n", ")\n", "\n", "# Check the dataframe\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Okay, so we can see that the data was successfully read into the DataFrame and we also seem to have converted the asterisk (\\*) characters into `NaN` values. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Renaming columns\n", "\n", "As we saw above, some of the column names are a bit awkward and difficult to interpret. Luckily, it is easy to alter labels in a pandas DataFrame using the [rename](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html) function. In order to change the column names, we need to tell pandas how we want to rename the columns using a dictionary that lists old and new column names.\n", "\n", "Let's first check again the current column names in our DataFrame:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.columns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{admonition} Dictionaries\n", "A [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) is a specific data structure in Python for storing key-value pairs. 
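For example, `{\"USAF\": \"STATION_NUMBER\"}` is a dictionary with a single key-value pair, in which the key `\"USAF\"` maps to the value `\"STATION_NUMBER\"`. 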
During this course, we will use dictionaries mainly when renaming columns in a pandas DataFrame, but dictionaries are useful for many different purposes! For more information about Python dictionaries, check out [this tutorial](https://realpython.com/python-dicts/).\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can define the new column names using a [dictionary](https://www.tutorialspoint.com/python/python_dictionary.htm) where we list \"`key: value`\" pairs, in which the original column name (the one which will be replaced) is the key and the new column name is the value.\n", "\n", "- Let's change the following:\n", " \n", " - `YR--MODAHRMN` to `TIME`\n", " - `SPD` to `SPEED`\n", " - `GUS` to `GUST`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Create the dictionary with old and new names\n", "new_names = {\"YR--MODAHRMN\": \"TIME\", \"SPD\": \"SPEED\", \"GUS\": \"GUST\"}\n", "\n", "# Let's see what the variable new_names looks like\n", "new_names" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check the data type of the new_names variable\n", "type(new_names)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From above we can see that we have successfully created a new dictionary. \n", "\n", "Now we can change the column names by passing that dictionary using the parameter `columns` in the `rename()` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Rename the columns\n", "data = data.rename(columns=new_names)\n", "\n", "# Print the new columns\n", "print(data.columns)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Perfect, now our column names are easier to understand and use. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check your understanding\n", "\n", "The temperature values in our data files are again in Fahrenheit. As you might guess, we will soon convert these temperatures into Celsius. In order to avoid confusion with the columns, let's rename the column `TEMP` to `TEMP_F`. Let's also rename `USAF` to `STATION_NUMBER`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "# Solution\n", "# Create the dictionary with old and new names\n", "new_names = {\"USAF\": \"STATION_NUMBER\", \"TEMP\": \"TEMP_F\"}\n", "\n", "# Rename the columns\n", "data = data.rename(columns=new_names)\n", "\n", "# Check the output\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data properties\n", "\n", "As we learned last week, it's always a good idea to check basic properties of the input data before proceeding with the data analysis. 
Let's check the following:\n", "\n", "- Number of rows and columns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Top and bottom rows" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.tail()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Data types of the columns" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.dtypes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- Descriptive statistics" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "data.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here we can see that there is a varying number of observations per column (look at the `count` row above), because some of the columns have missing values." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Using your own functions in pandas \n", "\n", "Now it's again time to convert temperatures from Fahrenheit to Celsius! Yes, we have already done this many times before, but this time we will learn how to apply our own functions to data in a pandas DataFrame.\n", "\n", "**We will define a function for the temperature conversion, and apply this function to the Fahrenheit value on each row of the DataFrame. The output Celsius values will be stored in a new column called** `TEMP_C`.\n", "\n", "We will first see how we can apply the function row-by-row using a `for` loop, and then we will learn how to apply the function to all rows at once in a more efficient way.\n", "\n", "### Defining the function\n", "\n", "For both of these approaches, we first need to define our temperature conversion function from Fahrenheit to Celsius:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def fahr_to_celsius(temp_fahrenheit):\n", " \"\"\"Function to convert Fahrenheit temperature into Celsius.\n", "\n", " Parameters\n", " ----------\n", "\n", " temp_fahrenheit: int | float\n", " Input temperature in Fahrenheit (should be a number)\n", "\n", " Returns\n", " -------\n", "\n", " Temperature in Celsius (float)\n", " \"\"\"\n", "\n", " # Convert the Fahrenheit into Celsius\n", " converted_temp = (temp_fahrenheit - 32) / 1.8\n", "\n", " return converted_temp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's test the function with a known value:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fahr_to_celsius(32)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also print out the first rows of our data frame to see our input data before further processing: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Iterating over rows\n", "\n", "We can apply the function one row at a time using a `for` loop and the [iterrows()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html) method. In other words, we can use the `iterrows()` method and a `for` loop to repeat a process *for each row in a pandas DataFrame*. 
Please note that iterating over rows is a rather inefficient approach, but it is still useful for understanding the logic behind the iteration.\n", "\n", "When using the `iterrows()` method it is important to understand that `iterrows()` accesses not only the values of one row, but also the `index` of the row. \n", "\n", "Let's start with a simple for loop that goes through each row in our DataFrame.\n", "\n", "```{note}\n", "We use single quotes to select the column `TEMP_F` of the row in the example below. This is because using double quotes would result in a `SyntaxError` since Python would interpret this as the end of the string for the `print()` function.\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Iterate over the rows\n", "for idx, row in data.iterrows():\n", "\n", " # Print the index value\n", " print(f\"Index: {idx}\")\n", "\n", " # Print the row\n", " print(f\"Temp F: {row['TEMP_F']}\\n\")\n", "\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{admonition} Breaking a loop\n", "When developing a for loop, you don't always need to go through the entire loop if you just want to test things out. \n", "The [break](https://www.tutorialspoint.com/python/python_break_statement.htm) statement in Python terminates the current loop wherever it is placed, and we used it here just to check out the values on the first row.\n", "With a large data file or dataset, you might not want to print out thousands of values to the screen!\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the `idx` variable indeed contains the index value at position 0 (the first row) and the `row` variable contains all the data from that given row stored as a pandas `Series`.\n", "\n", "Let's now create an empty column `TEMP_C` for the Celsius temperatures and update the values in that column using the `fahr_to_celsius` function we defined earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create an empty float column for the output values\n", "data[\"TEMP_C\"] = 0.0\n", "\n", "# Iterate over the rows\n", "for idx, row in data.iterrows():\n", "\n", " # Convert the Fahrenheit to Celsius\n", " celsius = fahr_to_celsius(row[\"TEMP_F\"])\n", "\n", " # Update the value of the 'TEMP_C' column with the converted value\n", " data.at[idx, \"TEMP_C\"] = celsius" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{admonition} Reminder: .at or .loc?\n", "Here, you could also use `data.loc[idx, new_column] = celsius` to achieve the same result. \n", " \n", "If you only need to access a single value in a DataFrame, [DataFrame.at](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.at.html) is faster compared to [DataFrame.loc](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html), which is designed for accessing groups of rows and columns. \n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's see what our DataFrame looks like now after the calculations above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Applying the function\n", "\n", "pandas DataFrames and Series have a dedicated method `.apply()` for applying functions on columns (or rows!). 
When using `.apply()`, we pass the function name (without parentheses!) as an argument to the `apply()` method. Let's start by applying the function to the `TEMP_F` column that contains the temperature values in Fahrenheit." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"TEMP_F\"].apply(fahr_to_celsius)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results look logical and we can store them permanently into a new column (overwriting the old values): " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"TEMP_C\"] = data[\"TEMP_F\"].apply(fahr_to_celsius)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also apply the function to several columns at once. Furthermore, we can select (and reorder) the columns at the same time." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[[\"TEMP_F\", \"MIN\", \"MAX\"]].apply(fahr_to_celsius)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check your understanding\n", "\n", "Convert `'TEMP_F'`, `'MIN'`, `'MAX'` to Celsius by applying the function like we did above and store the outputs to new columns `'TEMP_C'`, `'MIN_C'`, `'MAX_C'`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "# Solution\n", "data[[\"TEMP_C\", \"MIN_C\", \"MAX_C\"]] = data[[\"TEMP_F\", \"MIN\", \"MAX\"]].apply(\n", " fahr_to_celsius\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Applying the function to all columns with `data.apply(fahr_to_celsius)` would not give an error in our case, but the results also don't make much sense for columns where the input data was something other than Fahrenheit temperatures." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You might also notice that our conversion function would allow us to \n", "pass one column or even the entire dataframe as a parameter. For example, like this: `fahr_to_celsius(data[\"TEMP_F\"])`. However, the code is perhaps easier to follow when using the apply method." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check the output:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "data.head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{admonition} Should I use .iterrows() or .apply()?\n", "We are teaching the `.iterrows()` method because it helps to understand the structure of a DataFrame and the process of looping through DataFrame rows. However, using `.apply()` is often more efficient in terms of execution time. \n", "\n", "At this point, the most important thing is that you understand what happens when you are modifying the values in a pandas DataFrame. When doing the course exercises, either of these approaches is ok!\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Parsing dates\n", "\n", "We will eventually want to group our data based on month in order to see if April temperatures in 2019 were higher than average. 
Currently, the date and time information is stored in the column `TIME` (which was originally titled `YR--MODAHRMN`):\n", "\n", "`YR--MODAHRMN = YEAR-MONTH-DAY-HOUR-MINUTE IN GREENWICH MEAN TIME (GMT)`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's have a closer look at the date and time information we have by checking the values in that column, and their data type:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"TIME\"].head(10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"TIME\"].tail(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `TIME` column contains several observations per day (and even several observations per hour). The timestamp for the first observation is `190601010600`, i.e. from 1st of January 1906 (way back!), and the timestamp for the latest observation is `201910012350`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"TIME\"].dtypes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The information is stored as integer values.\n", "\n", "We want to **aggregate the data on a monthly level**, and in order to do so we need to \"label\" each row of data based on the month when the record was observed. In order to do this, we need to somehow separate information about the year and month for each row.\n", "\n", "We create these \"labels\" by making a new column (or an index) containing information about the month (including the year, but excluding day, hours, and minutes).\n", "\n", "Before taking that step, we should first convert the contents in the `TIME` column to character strings for convenience." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Convert to string\n", "data[\"TIME_STR\"] = data[\"TIME\"].astype(str)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### String slicing\n", "\n", "Now that we have converted the date and time information into character strings, we next need to \"cut\" the needed information from the [string objects](https://docs.python.org/3/tutorial/introduction.html#strings). If we look at the latest timestamp in the data (`201910012350`), we can see that there is a systematic pattern `YEAR-MONTH-DAY-HOUR-MINUTE`. The first four characters represent the year, and the first six characters are the year + month!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "date = \"201910012350\"\n", "date[0:6]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Based on this information, we can slice the correct range of characters from the `TIME_STR` column using [pandas.Series.str.slice()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.slice.html).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Slice the string\n", "data[\"YEAR_MONTH\"] = data[\"TIME_STR\"].str.slice(start=0, stop=6)\n", "\n", "# Let's see what we have\n", "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Nice! Now we have \"labeled\" the rows based on information about date and time, but only including the year and month in the labels."
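] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a side note, the same slicing can also be done by indexing the `.str` accessor directly, much like slicing a regular Python string. A minimal sketch, which should produce the same values as the `.str.slice()` call above:\n", "\n", "```python\n", "# Equivalent to data[\"TIME_STR\"].str.slice(start=0, stop=6)\n", "data[\"TIME_STR\"].str[0:6]\n", "```"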
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check your understanding\n", "\n", "Create a new column `'MONTH'` with information about the month without the year." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "hide-cell" ] }, "outputs": [], "source": [ "# Solution\n", "# Extract information about month from the TIME_STR column into a new column 'MONTH':\n", "data[\"MONTH\"] = data[\"TIME_STR\"].str.slice(start=4, stop=6)\n", "\n", "# Check the result\n", "data[[\"YEAR_MONTH\", \"MONTH\"]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Datetime (optional for Lesson 6)\n", "\n", "In pandas, we can convert dates and times into a new data type [datetime](https://docs.python.org/3.7/library/datetime.html) using [pandas.to_datetime](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html) function." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Convert character strings to datetime\n", "data[\"DATE\"] = pd.to_datetime(data[\"TIME_STR\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check the output\n", "data[\"DATE\"].head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{admonition} Pandas Series datetime properties\n", "There are several methods available for accessing information about the properties of datetime values. Read more from the pandas documentation about [datetime properties](https://pandas.pydata.org/pandas-docs/stable/reference/series.html#datetime-properties).\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can extract different time units based on the datetime-column using the [pandas.Series.dt](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.html) accessor:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"DATE\"].dt.year" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"DATE\"].dt.month" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also combine the datetime functionalities with other methods from pandas. For example, we can check the number of unique years in our input data: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"DATE\"].dt.year.nunique()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the final analysis, we need combined information of the year and month. One way to achieve this is to use the `format` parameter to define the output datetime format according to [strftime(format)](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior) method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Convert to datetime and keep only year and month\n", "data[\"YEAR_MONTH_DT\"] = pd.to_datetime(data[\"TIME_STR\"], format=\"%Y%m\", exact=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`exact=False` finds the characters matching the specified format and drops out the rest (days, hours and minutes are excluded in the output)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"YEAR_MONTH_DT\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we have a unique label for each month as a datetime object." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Aggregating data in Pandas by grouping\n", "\n", "Here, we will learn how to use [pandas.DataFrame.groupby](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html) which is a handy method for compressing large amounts of data and computing statistics for subgroups.\n", "\n", "We will use the groupby method to calculate the average temperatures for each month through these main steps:\n", "\n", " 1. **Grouping the data** based on the year and month\n", " 2. Calculating the average for each month (each group) \n", " 3. Storing those values into **a new DataFrame** called `monthly_data`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we start grouping the data, let's once more see what our input data looks like." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"number of rows: {len(data)}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have quite a few rows of weather data, and several observations per day. Our goal is to create an aggreated data frame that would have only one row per month." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's **group** our data based on the unique year and month combinations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grouped = data.groupby(\"YEAR_MONTH\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "````{note}\n", "It is also possible to create combinations of years and months on-the-fly when grouping the data:\n", " \n", "```\n", "# Group the data \n", "grouped = data.groupby(['YEAR', 'MONTH'])\n", "```\n", "````" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore the new variable `grouped`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "type(grouped)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "len(grouped)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have a new object with type `DataFrameGroupBy` with 82 groups. In order to understand what just happened, let's also check the number of unique year and month combinations in our data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data[\"YEAR_MONTH\"].nunique()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Length of the grouped object should be the same as the number of unique values in the column we used for grouping. For each unique value, there is a group of data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore our grouped data even further. \n", "\n", "We can check the \"names\" of each group." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Next line will print out all 82 group \"keys\"\n", "# grouped.groups.keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Accessing data for one group\n", "\n", "Let us now check the contents for the group representing August 2019 (the name of that group is `(2019, 4)` if you grouped the data based on datetime columns `YEAR` and `MONTH`). We can get the values of that hour from the grouped object using the `get_group()` method." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Specify a month (as character string)\n", "month = \"190601\"\n", "\n", "# Select the group\n", "group1 = grouped.get_group(month)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's see what we have\n", "group1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ahaa! As we can see, a single group contains a **DataFrame** with values only for that specific month and year. Let's check the DataType of this group." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "type(group1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, as noted above, one group is a pandas DataFrame! This is really useful, because we can now use all the familiar DataFrame methods for calculating statistics, etc. for this specific group. We can, for example, calculate the average values for all variables using the statistical functions that we have seen already (e.g. mean, std, min, max, median, etc.).\n", "\n", "We can do that by using the `mean()` function that we already did during Lesson 5. \n", "\n", "- Let's calculate the mean for following attributes all at once:\n", "\n", " - `DIR`\n", " - `SPEED`\n", " - `GUST`\n", " - `TEMP`\n", " - `TEMP_C`\n", " - `MONTH`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Specify the columns that will be part of the calculation\n", "mean_cols = [\"DIR\", \"SPEED\", \"GUST\", \"TEMP_F\", \"TEMP_C\"]\n", "\n", "# Calculate the mean values all at one go\n", "mean_values = group1[mean_cols].mean()\n", "\n", "# Let's see what we have\n", "print(mean_values)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above, we saw how you can access data from a single group. In order to get information about all groups (all months) we can use a `for` loop or methods available in the grouped object.\n", "\n", "### For loops and grouped objects\n", "\n", "When iterating over the groups in our `DataFrameGroupBy` object it is important to understand that a single group in our `DataFrameGroupBy` actually contains not only the actual values, but also information about the `key` that was used to do the grouping. Hence, when iterating over the data we need to assign the `key` and the values into separate variables.\n", "\n", "So, let's see how we can iterate over the groups and print the key and the data from a single group (again using `break` to only see what is happening for the first group)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Iterate over groups\n", "for key, group in grouped:\n", " # Print key and group\n", " print(f\"Key:\\n {key}\")\n", " print(f\"\\nFirst rows of data in this group:\\n {group.head()}\")\n", "\n", " # Stop iteration with break command\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "OK, so from here we can see that the `key` contains the name of the group (year, month).\n", "\n", "Let's build on this and see how we can create a DataFrame where we calculate the mean values for all those weather attributes that we were interested in. 
We will repeat some of the earlier steps here so you can see and better understand what is happening." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create an empty DataFrame for the aggregated values\n", "monthly_data = pd.DataFrame()\n", "\n", "# The columns that we want to aggregate\n", "mean_cols = [\"DIR\", \"SPEED\", \"GUST\", \"TEMP_F\", \"TEMP_C\"]\n", "\n", "# Iterate over the groups\n", "for key, group in grouped:\n", "\n", "    # Calculate mean\n", "    mean_values = group[mean_cols].mean()\n", "\n", "    # Add the `key` (i.e. the date+time information) into the aggregated values\n", "    mean_values[\"YEAR_MONTH\"] = key\n", "\n", "    # Append the aggregated values into the DataFrame\n", "    # (DataFrame.append was removed in pandas 2.0; there you can use, e.g.,\n", "    # pd.concat([monthly_data, mean_values.to_frame().T], ignore_index=True))\n", "    monthly_data = monthly_data.append(mean_values, ignore_index=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, let us see what we have." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "print(monthly_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Awesome! We have now aggregated our data into a new DataFrame called `monthly_data` that contains the mean values for each month in the data set." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Finding the mean for all groups at once\n", "\n", "We can also achieve the same result by computing the mean of all columns for all groups in the grouped object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grouped.mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Detecting warm months\n", "\n", "Now that we have aggregated our data on a monthly level, all we need to do is to sort our results in order to check which years had the warmest April temperatures. A simple approach is to select all Aprils from the data, group the data and check which group(s) have the highest mean value.\n", "\n", "We can start this by selecting all records that are from April (regardless of the year)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aprils = data[data[\"MONTH\"] == \"04\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we can take a subset of columns that might contain interesting information." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "aprils = aprils[[\"STATION_NUMBER\", \"TEMP_F\", \"TEMP_C\", \"YEAR_MONTH\"]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can group by year and month." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grouped = aprils.groupby(by=\"YEAR_MONTH\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And then we can calculate the mean for each group." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "monthly_mean = grouped.mean()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "monthly_mean.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can check the highest temperature values by sorting the data frame in descending order."
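] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{note}\n", "pandas also has a dedicated [nlargest()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.nlargest.html) method for questions like this. For example, `monthly_mean.nlargest(10, \"TEMP_C\")` should return the ten rows with the highest `TEMP_C` values directly.\n", "```"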
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "monthly_mean.sort_values(by=\"TEMP_C\", ascending=False).head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, how did April 2019 rank at the Tampere Pirkkala observation station? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Repeating the data analysis with a larger dataset\n", "\n", "To wrap up today's lesson, let's repeat the data analysis steps above for all the available data we have (!!). First, it would be good to confirm the path to the **folder** where all the input data are located.\n", "\n", "The idea is, that we will repeat the analysis process for each input file using a (rather long) for loop! Here we have all the main analysis steps with some additional output info, all in one long code cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Read selected columns of data using varying amount of spaces as separator and specifying * characters as NoData values\n", "data = pd.read_csv(\n", " fp,\n", " delim_whitespace=True,\n", " usecols=[\"USAF\", \"YR--MODAHRMN\", \"DIR\", \"SPD\", \"GUS\", \"TEMP\", \"MAX\", \"MIN\"],\n", " na_values=[\"*\", \"**\", \"***\", \"****\", \"*****\", \"******\"],\n", ")\n", "\n", "# Rename the columns\n", "new_names = {\n", " \"USAF\": \"STATION_NUMBER\",\n", " \"YR--MODAHRMN\": \"TIME\",\n", " \"SPD\": \"SPEED\",\n", " \"GUS\": \"GUST\",\n", " \"TEMP\": \"TEMP_F\",\n", "}\n", "data = data.rename(columns=new_names)\n", "\n", "# Print info about the current input file:\n", "print(f\"STATION NUMBER: {data.at[0, 'STATION_NUMBER']}\")\n", "print(f\"NUMBER OF OBSERVATIONS: {len(data)}\")\n", "\n", "# Create column\n", "col_name = \"TEMP_C\"\n", "data[col_name] = None\n", "\n", "# Convert tempetarues from Fahrenheits to Celsius\n", "data[\"TEMP_C\"] = data[\"TEMP_F\"].apply(fahr_to_celsius)\n", "\n", "# Convert TIME to string\n", "data[\"TIME_STR\"] = data[\"TIME\"].astype(str)\n", "\n", "# Parse year and month\n", "data[\"MONTH\"] = data[\"TIME_STR\"].str.slice(start=5, stop=6).astype(int)\n", "data[\"YEAR\"] = data[\"TIME_STR\"].str.slice(start=0, stop=4).astype(int)\n", "\n", "# Extract observations for the months of April\n", "aprils = data[data[\"MONTH\"] == 4]\n", "\n", "# Take a subset of columns\n", "aprils = aprils[[\"STATION_NUMBER\", \"TEMP_F\", \"TEMP_C\", \"YEAR\", \"MONTH\"]]\n", "\n", "# Group by year and month\n", "grouped = aprils.groupby(by=[\"YEAR\", \"MONTH\"])\n", "\n", "# Get mean values for each group\n", "monthly_mean = grouped.mean()\n", "\n", "# Print info\n", "print(monthly_mean.sort_values(by=\"TEMP_C\", ascending=False).head(5))\n", "print(\"\\n\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(data.at[0, \"STATION_NUMBER\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "At this point we will use the `glob()` function from the module `glob` to list our input files. glob is a handy function for finding files in a directrory that match a given pattern, for example." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import glob" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "file_list = glob.glob(r\"data/0*txt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{note}\n", "Note that we're using the \\* character as a wildcard, so any file that starts with `data/0` and ends with `txt` will be added to the list of files we will iterate over. We specifically use `data/0` as the starting part of the file names to avoid having our metadata files included in the list!\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"Number of files in the list: {len(file_list)}\")\n", "print(file_list)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, you should have all the relevant file names in a list, and we can loop over the list using a for loop." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for fp in file_list:\n", " print(fp)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [], "source": [ "# Repeat the analysis steps for each input file:\n", "for fp in file_list:\n", "\n", " # Read selected columns of data using varying amount of spaces as separator and specifying * characters as NoData values\n", " data = pd.read_csv(\n", " fp,\n", " delim_whitespace=True,\n", " usecols=[\"USAF\", \"YR--MODAHRMN\", \"DIR\", \"SPD\", \"GUS\", \"TEMP\", \"MAX\", \"MIN\"],\n", " na_values=[\"*\", \"**\", \"***\", \"****\", \"*****\", \"******\"],\n", " )\n", "\n", " # Rename the columns\n", " new_names = {\n", " \"USAF\": \"STATION_NUMBER\",\n", " \"YR--MODAHRMN\": \"TIME\",\n", " \"SPD\": \"SPEED\",\n", " \"GUS\": \"GUST\",\n", " \"TEMP\": \"TEMP_F\",\n", " }\n", " data = data.rename(columns=new_names)\n", "\n", " # Print info about the current input file:\n", " print(f\"STATION NUMBER: {data.at[0, 'STATION_NUMBER']}\")\n", " print(f\"NUMBER OF OBSERVATIONS: {len(data)}\")\n", "\n", " # Create column\n", " col_name = \"TEMP_C\"\n", " data[col_name] = None\n", "\n", " # Convert tempetarues from Fahrenheits to Celsius\n", " data[\"TEMP_C\"] = data[\"TEMP_F\"].apply(fahr_to_celsius)\n", "\n", " # Convert TIME to string\n", " data[\"TIME_STR\"] = data[\"TIME\"].astype(str)\n", "\n", " # Parse year and month\n", " data[\"MONTH\"] = data[\"TIME_STR\"].str.slice(start=5, stop=6).astype(int)\n", " data[\"YEAR\"] = data[\"TIME_STR\"].str.slice(start=0, stop=4).astype(int)\n", "\n", " # Extract observations for the months of April\n", " aprils = data[data[\"MONTH\"] == 4]\n", "\n", " # Take a subset of columns\n", " aprils = aprils[[\"STATION_NUMBER\", \"TEMP_F\", \"TEMP_C\", \"YEAR\", \"MONTH\"]]\n", "\n", " # Group by year and month\n", " grouped = aprils.groupby(by=[\"YEAR\", \"MONTH\"])\n", "\n", " # Get mean values for each group\n", " monthly_mean = grouped.mean()\n", "\n", " # Print info\n", " print(monthly_mean.sort_values(by=\"TEMP_C\", ascending=False).head(5))\n", " print(\"\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, what can we conclude about how warm April 2019 was in Finland? Was it actually the warmest April on record? If so, in which stations?" 
] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 4 }