{ "cells": [ { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "# Dask DataFrame - parallelized pandas\n", "\n", "Looks and feels like the pandas API, but for parallel and distributed workflows. \n", "\n", "At its core, the `dask.dataframe` module implements a \"blocked parallel\" `DataFrame` object that looks and feels like the pandas API, but for parallel and distributed workflows. One Dask `DataFrame` is composed of many in-memory pandas `DataFrame`s separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.\n" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "**Related Documentation**\n", "\n", "* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)\n", "* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)\n", "* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)\n", "* [DataFrame examples](https://examples.dask.org/dataframe.html)\n", "* [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)\n", "\n", "## When to use `dask.dataframe`\n", "\n", "pandas is great for tabular datasets that fit in memory. A general rule of thumb for pandas is:\n", "\n", "> \"Have 5 to 10 times as much RAM as the size of your dataset\"\n", ">\n", "> ~ Wes McKinney (2017) in [10 things I hate about pandas](https://wesmckinney.com/blog/apache-arrow-pandas-internals/)\n", "\n", "Here \"size of dataset\" means the size of your dataset on disk.\n", "\n", "Dask becomes useful when your dataset exceeds this rule of thumb.\n", "\n", "In this notebook, you will be working with the New York City Airline data. This dataset is only ~200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets **much** larger than memory.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create datasets" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create the datasets you will be using in this notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%run prep.py -d flights" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up your local cluster" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Create a local Dask cluster and connect a client to it. Don't worry about this bit of code for now; you will learn more in the Distributed notebook."
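, "\n", "\n", "As an optional aside: if you later want more control over the local cluster (for example, to cap the memory each worker may use), you can build the cluster explicitly and then attach a client to it. Here is a minimal sketch -- the parameter values are purely illustrative, not recommendations:\n", "\n", "```python\n", "from dask.distributed import Client, LocalCluster\n", "\n", "# Build the cluster explicitly, then attach a client to it.\n", "cluster = LocalCluster(n_workers=4, threads_per_worker=1, memory_limit=\"2GiB\")\n", "client = Client(cluster)\n", "```\n", "\n", "The next cell uses the shorter `Client(n_workers=4)` form, which creates an equivalent local cluster for you."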
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from dask.distributed import Client\n", "\n", "client = Client(n_workers=4)\n", "client" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Dask Diagnostic Dashboard\n", "\n", "Dask Distributed provides a useful Dashboard to visualize the state of your cluster and computations.\n", "\n", "If you're on **JupyterLab or Binder**, you can use the [Dask JupyterLab extension](https://github.com/dask/dask-labextension) (which should already be installed in your environment) to open the dashboard plots:\n", "* Click on the Dask logo in the left sidebar\n", "* Click on the magnifying glass icon, which will automatically connect to the active dashboard (if that doesn't work, you can type/paste the dashboard link http://127.0.0.1:8787 in the field)\n", "* Click on **\"Task Stream\"**, **\"Progress Bar\"**, and **\"Worker Memory\"**, which will open these plots in new tabs\n", "* Re-organize the tabs to suit your workflow!\n", "\n", "Alternatively, click on the dashboard link displayed in the Client details above: http://127.0.0.1:8787/status. It will open a new browser tab with the Dashboard." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Reading and working with datasets\n", "\n", "Let's read an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import dask" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By convention, we import the module `dask.dataframe` as `dd`, and call the corresponding `DataFrame` object `ddf`.\n", "\n", "**Note**: The term \"Dask DataFrame\" is slightly overloaded. Depending on the context, it can refer to the module or the DataFrame object. To avoid confusion, throughout this notebook:\n", "- `dask.dataframe` (note the all lowercase) refers to the API, and\n", "- `DataFrame` (note the CamelCase) refers to the object.\n", "\n", "The following filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same `DataFrame`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import dask.dataframe as dd\n", "\n", "ddf = dd.read_csv(\n", "    os.path.join(\"data\", \"nycflights\", \"*.csv\"), parse_dates={\"Date\": [0, 1, 2]}\n", ")\n", "ddf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Dask has not loaded the data yet; instead, it has:\n", "- investigated the input path and found that there are ten matching files\n", "- intelligently created a set of jobs for each chunk -- one per original CSV file in this case" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that the representation of the `DataFrame` object contains no data - Dask has just done enough to read the start of the first file and infer the column names and dtypes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Lazy Evaluation\n", "\n", "Most Dask collections, including the Dask `DataFrame`, are evaluated lazily: Dask constructs the logic of your computation (called a task graph) immediately, but \"evaluates\" it only when necessary. You can view this task graph using `.visualize()`.\n", "\n", "You will learn more about this in the Delayed notebook, but for now, note that we need to call `.compute()` to trigger actual computations."
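, "\n", "\n", "For example, here is a small sketch of the difference, using the `DepDelay` column that is already part of `ddf` (no new API here, just the lazy-then-compute pattern):\n", "\n", "```python\n", "# Building the expression is nearly instant and returns a lazy object,\n", "# not a number:\n", "max_delay = ddf.DepDelay.max()\n", "type(max_delay)  # a lazy Dask scalar\n", "\n", "# Only calling .compute() reads the CSV files and reduces them to a single float:\n", "max_delay.compute()\n", "```"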
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ddf.visualize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Some functions like `len` and `head` also trigger a computation. Specifically, calling `len` will:\n", "- load the actual data (that is, load each file into a pandas DataFrame)\n", "- then apply the corresponding function to each pandas DataFrame (also known as a partition)\n", "- combine the subtotals to give you the final grand total" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# load and count number of rows\n", "len(ddf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can view the start and end of the data as you would in pandas:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ddf.head()" ] }, { "cell_type": "markdown", "metadata": { "tags": [ "raises-exception" ] }, "source": [ "```python\n", "ddf.tail()\n", "\n", "# ValueError: Mismatched dtypes found in `pd.read_csv`/`pd.read_table`.\n", "\n", "# +----------------+---------+----------+\n", "# | Column         | Found   | Expected |\n", "# +----------------+---------+----------+\n", "# | CRSElapsedTime | float64 | int64    |\n", "# | TailNum        | object  | float64  |\n", "# +----------------+---------+----------+\n", "\n", "# The following columns also raised exceptions on conversion:\n", "\n", "# - TailNum\n", "#   ValueError(\"could not convert string to float: 'N54711'\")\n", "\n", "# Usually this is due to dask's dtype inference failing, and\n", "# *may* be fixed by specifying dtypes manually by adding:\n", "\n", "# dtype={'CRSElapsedTime': 'float64',\n", "#        'TailNum': 'object'}\n", "\n", "# to the call to `read_csv`/`read_table`.\n", "\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unlike `pandas.read_csv`, which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or the first file, if using a glob). These inferred datatypes are then enforced when reading all partitions.\n", "\n", "In this case, the datatypes inferred from the sample are incorrect. In the sampled rows, `TailNum` has no values at all, so pandas infers a `float` column of NaNs, but later rows turn out to contain strings (the `object` dtype). Similarly, `CRSElapsedTime` is inferred as an `int` from the sample, but later partitions contain missing values and therefore need a `float`. Note that Dask gives an informative error message about the mismatch. When this happens, you have a few options:\n", "\n", "- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.\n", "- Increase the size of the sample using the `sample` keyword (its value is in bytes).\n", "- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually `floats` (which do allow missing values). In our particular case this would only help with `CRSElapsedTime`, not `TailNum`.\n", "\n", "In our case we'll use the first option and directly specify the `dtypes` of the offending columns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ddf = dd.read_csv(\n", "    os.path.join(\"data\", \"nycflights\", \"*.csv\"),\n", "    parse_dates={\"Date\": [0, 1, 2]},\n", "    dtype={\"TailNum\": str, \"CRSElapsedTime\": float, \"Cancelled\": bool},\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ddf.tail()  # now works" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reading from remote storage\n", "\n", "If you're thinking about distributed computing, your data is probably stored remotely on services (like Amazon's S3 or Google's cloud storage) and is in a friendlier format (like Parquet). Dask can read data in various formats directly from these remote locations **lazily** and **in parallel**.\n", "\n", "Here's how you can read the NYC taxi cab data from Amazon S3:\n", "\n", "```python\n", "ddf = dd.read_parquet(\n", "    \"s3://nyc-tlc/trip data/yellow_tripdata_2012-*.parquet\",\n", ")\n", "```\n", "\n", "You can also leverage Parquet-specific optimizations like column selection and metadata handling; learn more in [the Dask documentation on working with Parquet files](https://docs.dask.org/en/stable/dataframe-parquet.html)." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Computations with `dask.dataframe`\n", "\n", "Let's compute the maximum flight departure delay (`DepDelay`).\n", "\n", "With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums.\n", "\n", "```python\n", "import pandas as pd\n", "\n", "files = os.listdir(os.path.join('data', 'nycflights'))\n", "\n", "maxes = []\n", "\n", "for file in files:\n", "    df = pd.read_csv(os.path.join('data', 'nycflights', file))\n", "    maxes.append(df.DepDelay.max())\n", "\n", "final_max = max(maxes)\n", "```\n", "\n", "`dask.dataframe` lets us write pandas-like code that operates on larger-than-memory datasets in parallel." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "result = ddf.DepDelay.max()\n", "result.compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This creates the lazy computation for us and then runs it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note:** Dask will delete intermediate results (like the full pandas DataFrame for each file) as soon as possible. This means you can handle datasets that are larger than memory, but repeated computations will have to load all of the data each time. (Run the code above again: is it faster or slower than you would expect?)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can view the underlying task graph using `.visualize()`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# notice the parallelism\n", "result.visualize()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises\n", "\n", "In this section you will do a few `dask.dataframe` computations. If you are comfortable with pandas, then these should be familiar. You will have to think about when to call `.compute()`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1. How many rows are in our dataset?\n", "\n", "_Hint_: how would you check how many items are in a list?"
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [] }, "outputs": [], "source": [ "len(ddf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2. In total, how many non-canceled flights were taken?\n", "\n", "_Hint_: use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [] }, "outputs": [], "source": [ "len(ddf[~ddf.Cancelled])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3. In total, how many non-canceled flights were taken from each airport?\n", "\n", "*Hint*: use [groupby](https://pandas.pydata.org/pandas-docs/stable/groupby.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [] }, "outputs": [], "source": [ "ddf[~ddf.Cancelled].groupby(\"Origin\").Origin.count().compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 4. What was the average departure delay from each airport?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [] }, "outputs": [], "source": [ "ddf.groupby(\"Origin\").DepDelay.mean().compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 5. What day of the week has the worst average departure delay?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [] }, "outputs": [], "source": [ "ddf.groupby(\"DayOfWeek\").DepDelay.mean().idxmax().compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 6. Let's say the distance column is erroneous and you need to add 1 to all values. How would you do this?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Your code here" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "jupyter": { "source_hidden": true }, "tags": [] }, "outputs": [], "source": [ "ddf[\"Distance\"].apply(\n", "    lambda x: x + 1\n", ").compute()  # don't worry about the warning; we'll discuss it in the next sections\n", "\n", "# OR\n", "\n", "(ddf[\"Distance\"] + 1).compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sharing Intermediate Results\n", "\n", "When computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` can identify duplicate computations in the task graph, allowing shared work to be computed only once.\n", "\n", "For example, let's compute the mean and standard deviation of the departure delay for all non-canceled flights. Since Dask operations are lazy, those values aren't the final results yet; they're just the steps required to get the result.\n", "\n", "If you compute them with two separate calls to `compute`, there is no sharing of intermediate computations."
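, "\n", "\n", "We will define the two lazy results in the next cell and then time both approaches. As a small aside (a sketch using the `mean_delay` and `std_delay` objects defined below): `dask.compute` also accepts nested containers such as lists or dicts of lazy objects and returns the results in the same shape, which is handy when you track several related statistics:\n", "\n", "```python\n", "# One pass over the data; dask.compute returns a tuple, here holding a\n", "# single dict of concrete values.\n", "(stats,) = dask.compute({\"mean\": mean_delay, \"std\": std_delay})\n", "stats[\"mean\"], stats[\"std\"]\n", "```"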
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "non_canceled = ddf[~ddf.Cancelled]\n", "mean_delay = non_canceled.DepDelay.mean()\n", "std_delay = non_canceled.DepDelay.std()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "mean_delay_res = mean_delay.compute()\n", "std_delay_res = std_delay.compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `dask.compute`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's try passing both to a single `compute` call." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "%%time\n", "\n", "mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using `dask.compute` takes roughly half the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to be done only once instead of twice. In particular, using `dask.compute` only does the following once:\n", "\n", "- the calls to `read_csv`\n", "- the filter (`df[~df.Cancelled]`)\n", "- some of the necessary reductions (`sum`, `count`)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see what the merged task graph for multiple results looks like (and what's shared), you can use the `dask.visualize` function (you might want to use `filename='graph.pdf'` to save the graph to disk so that you can zoom in more easily):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dask.visualize(mean_delay, std_delay, engine=\"cytoscape\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### `.persist()`\n", "\n", "While using a distributed scheduler (you will learn more about schedulers in the upcoming notebooks), you can keep some _data that you want to use often_ in _distributed memory_.\n", "\n", "`persist` generates \"Futures\" (more on this later as well) and stores them in the same structure as your output. You can use `persist` with any data or computation that fits in memory." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to analyze data only for non-canceled flights departing from JFK airport, you can either make two `compute` calls, as in the previous section:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "non_canceled = ddf[~ddf.Cancelled]\n", "ddf_jfk = non_canceled[non_canceled.Origin == \"JFK\"]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "ddf_jfk.DepDelay.mean().compute()\n", "ddf_jfk.DepDelay.sum().compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or, consider persisting that subset of data in memory.\n", "\n", "See the \"Graph\" dashboard plot: the red squares indicate persisted data stored as Futures in memory. You will also notice an increase in memory use on the \"Worker Memory\" dashboard plot."
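, "\n", "\n", "A side note before we persist: `persist` returns immediately while the work continues in the background. If you want to block until the persisted partitions are actually in distributed memory, you can use `wait` from `dask.distributed` -- a minimal sketch (the next cell just uses the plain `persist` call):\n", "\n", "```python\n", "from dask.distributed import wait\n", "\n", "ddf_jfk = ddf_jfk.persist()  # returns control immediately\n", "wait(ddf_jfk)                # block until the partitions are in memory\n", "```\n", "\n", "When you no longer need the persisted data, dropping all references to it (for example with `del ddf_jfk`) lets the scheduler release that memory."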
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ddf_jfk = ddf_jfk.persist()  # returns control immediately" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "ddf_jfk.DepDelay.mean().compute()\n", "ddf_jfk.DepDelay.std().compute()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Analyses on this persisted data are faster because we are not repeating the loading and selecting (non-canceled, JFK departure) operations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Custom code with Dask DataFrame\n", "\n", "`dask.dataframe` only covers a small but well-used portion of the pandas API.\n", "\n", "There are two reasons for this limitation:\n", "\n", "1. The pandas API is *huge*\n", "2. Some operations are genuinely hard to do in parallel, e.g., sorting.\n", "\n", "Additionally, some important operations like `set_index` work, but are slower than in pandas because they include substantial shuffling of data and may write out to disk.\n", "\n", "**What if you want to use some custom functions that aren't (or can't be) implemented for Dask DataFrame yet?**\n", "\n", "You can open an issue on the [Dask issue tracker](https://github.com/dask/dask/issues) to discuss how feasible the function would be to implement, and you can consider contributing it to Dask.\n", "\n", "If it is a one-off custom function, or one that is tricky to implement in general, `dask.dataframe` provides a few methods that make applying custom functions to Dask DataFrames easier:\n", "\n", "- [`map_partitions`](https://docs.dask.org/en/latest/generated/dask.dataframe.DataFrame.map_partitions.html): to run a function on each partition (each pandas DataFrame) of the Dask DataFrame\n", "- [`map_overlap`](https://docs.dask.org/en/latest/generated/dask.dataframe.rolling.map_overlap.html): to run a function on each partition (each pandas DataFrame) of the Dask DataFrame, with some rows shared between neighboring partitions\n", "- [`reduction`](https://docs.dask.org/en/latest/generated/dask.dataframe.Series.reduction.html): for custom row-wise reduction operations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a quick look at the `map_partitions()` function:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "help(ddf.map_partitions)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The \"Distance\" column in `ddf` is currently in miles. Let's say we want to convert the units to kilometers and we have a general helper function, as shown below. In this case, we can use `map_partitions` to apply this function across each of the internal pandas `DataFrame`s in parallel." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "\n", "def my_custom_converter(df, multiplier=1):\n", "    return df * multiplier\n", "\n", "\n", "# An empty pandas Series describing the expected output structure\n", "meta = pd.Series(name=\"Distance\", dtype=\"float64\")\n", "\n", "# 1 mile = 1.609344 km\n", "distance_km = ddf.Distance.map_partitions(\n", "    my_custom_converter, multiplier=1.609344, meta=meta\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "distance_km.visualize()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "distance_km.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What is `meta`?\n", "\n", "Since Dask operates lazily, it doesn't always have enough information to infer the output structure (which includes datatypes) of certain operations.\n", "\n", "`meta` is a _suggestion_ to Dask about the output of your computation. Importantly, `meta` never changes the actual output; Dask only uses it as a stand-in until it can determine the actual output structure.\n", "\n", "Even though there are many ways to define `meta`, we suggest using a small pandas Series or DataFrame that matches the structure of your final output." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Close your local Dask cluster" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's good practice to always close any Dask cluster you create:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "client.shutdown()" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.5" } }, "nbformat": 4, "nbformat_minor": 4 }