You can run this notebook in a live session on Binder or view it on GitHub.

Bag: Parallel Lists for semi-structured data

Dask Bag excels at processing data that can be represented as a sequence of arbitrary inputs. We’ll refer to this as “messy” data, because it can contain complex nested structures, missing fields, mixtures of data types, etc. The functional programming style fits nicely with standard Python iteration, such as that found in the itertools module.

Messy data is often encountered at the beginning of data processing pipelines when large volumes of raw data are first consumed. The initial set of data might be JSON, CSV, XML, or any other format that does not enforce strict structure and data types. For this reason, the initial data massaging and processing is often done with Python lists, dicts, and sets.

These core data structures are optimized for general-purpose storage and processing. Adding streaming computation with iterators/generator expressions or libraries like itertools or toolz (https://toolz.readthedocs.io/en/latest/) lets us process large volumes in a small space. If we combine this with parallel processing then we can churn through a fair amount of data.
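
As a tiny illustration of this streaming style, here is a sketch with toy data (only the standard library is used; the inline strings stand in for a large file):

import itertools
import json

def lines():
    # stand-in for a large file handle; real data would stream from disk
    yield '{"name": "Alice", "amount": 100}'
    yield '{"name": "Bob", "amount": 200}'

records = map(json.loads, lines())                    # lazy: nothing parsed yet
large = filter(lambda r: r['amount'] > 150, records)  # still lazy
print(list(itertools.islice(large, 5)))               # pull at most five results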

Dask Bag is a high-level Dask collection that automates common workloads of this form. In a nutshell:

dask.bag = map, filter, toolz + parallel execution
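
For instance, the following sketch shows the same toy pipeline written with Python builtins and with dask.bag (illustrative; the data here is made up):

import dask.bag as db

data = list(range(10))

# sequential: plain map/filter over a list
sequential = list(map(lambda x: x ** 2, filter(lambda x: x % 2 == 0, data)))

# parallel: the same logic, spread across partitions and evaluated lazily
parallel = (db.from_sequence(data)
              .filter(lambda x: x % 2 == 0)
              .map(lambda x: x ** 2)
              .compute())

assert sequential == parallel  # both give [0, 4, 16, 36, 64]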

Create data

[1]:
%run prep.py -d accounts

Setup

Again, we’ll use the distributed scheduler. Schedulers will be explained in depth later.

[2]:
from dask.distributed import Client

client = Client(n_workers=4)

Creation

You can create a Bag from a Python sequence, from files, from data on S3, etc. We demonstrate using .take() to show elements of the data. (Doing .take(1) results in a tuple with one element.)

Note that the data are partitioned into blocks, and there are many items per block. In the first example, the two partitions contain five elements each, and in the following two, each file is partitioned into one or more blocks of bytes.

[3]:
# each element is an integer
import dask.bag as db
b = db.from_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], npartitions=2)
b.take(3)
[3]:
(1, 2, 3)
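
As a quick check of the partitioning, a bag reports its block count, and map_partitions can count the items per block (a minimal sketch, not an executed cell):

b.npartitions                      # 2
b.map_partitions(len).compute()    # [5, 5]: five elements per block
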
[4]:
# each element is a text file, where each line is a JSON object
# note that the compression is handled automatically
import os
b = db.read_text(os.path.join('data', 'accounts.*.json.gz'))
b.take(1)
[4]:
('{"id": 0, "name": "Jerry", "transactions": [{"transaction-id": 304, "amount": -1254}, {"transaction-id": 832, "amount": -1459}, {"transaction-id": 1291, "amount": -686}, {"transaction-id": 1428, "amount": -836}, {"transaction-id": 1597, "amount": -860}, {"transaction-id": 1638, "amount": -1167}, {"transaction-id": 1968, "amount": -747}, {"transaction-id": 2686, "amount": -1166}, {"transaction-id": 3429, "amount": -1197}, {"transaction-id": 3769, "amount": -832}, {"transaction-id": 3874, "amount": -426}, {"transaction-id": 3932, "amount": -979}, {"transaction-id": 3976, "amount": -973}, {"transaction-id": 4119, "amount": -937}, {"transaction-id": 4162, "amount": -966}, {"transaction-id": 4468, "amount": -630}, {"transaction-id": 4594, "amount": -956}, {"transaction-id": 4651, "amount": -717}, {"transaction-id": 4675, "amount": -1010}, {"transaction-id": 4965, "amount": -967}, {"transaction-id": 5035, "amount": -568}, {"transaction-id": 5121, "amount": -854}, {"transaction-id": 5339, "amount": -1087}, {"transaction-id": 5363, "amount": -728}, {"transaction-id": 6125, "amount": -994}, {"transaction-id": 6191, "amount": -944}, {"transaction-id": 6337, "amount": -952}, {"transaction-id": 7157, "amount": -1262}, {"transaction-id": 7470, "amount": -804}, {"transaction-id": 8555, "amount": -651}, {"transaction-id": 8924, "amount": -688}, {"transaction-id": 9063, "amount": -896}, {"transaction-id": 9683, "amount": -911}, {"transaction-id": 9792, "amount": -761}, {"transaction-id": 9975, "amount": -969}]}\n',)
[5]:
# Edit sources.py to configure source locations
import sources
sources.bag_url
[5]:
's3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv'
[6]:
# Requires `s3fs` library
# each partition is a remote CSV text file
b = db.read_text(sources.bag_url,
                 storage_options={'anon': True})
b.take(1)
[6]:
('VendorID,tpep_pickup_datetime,tpep_dropoff_datetime,passenger_count,trip_distance,pickup_longitude,pickup_latitude,RateCodeID,store_and_fwd_flag,dropoff_longitude,dropoff_latitude,payment_type,fare_amount,extra,mta_tax,tip_amount,tolls_amount,improvement_surcharge,total_amount\n',)

Manipulation

Bag objects provide the standard functional API found in projects like the Python standard library, toolz, or pyspark, including map, filter, groupby, etc.

Operations on Bag objects create new bags. Call the .compute() method to trigger execution, as we saw for Delayed objects.

[7]:
def is_even(n):
    return n % 2 == 0

b = db.from_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
c = b.filter(is_even).map(lambda x: x ** 2)
c
[7]:
dask.bag<lambda, npartitions=10>
[8]:
# blocking form: wait for completion (which is very fast in this case)
c.compute()
[8]:
[4, 16, 36, 64, 100]

Example: Accounts JSON data

We’ve created a fake dataset of gzipped JSON data in your data directory. This is like the dataset used in the DataFrame example we will see later, except that it bundles up all of the entries for each individual id into a single record. This is similar to data that you might collect from a document store database or a web API.

Each line is a JSON encoded dictionary with the following keys

  • id: Unique identifier of the customer

  • name: Name of the customer

  • transactions: List of transaction-id, amount pairs, one for each transaction for the customer in that file

[9]:
filename = os.path.join('data', 'accounts.*.json.gz')
lines = db.read_text(filename)
lines.take(3)
[9]:
('{"id": 0, "name": "Jerry", "transactions": [{"transaction-id": 304, "amount": -1254}, {"transaction-id": 832, "amount": -1459}, {"transaction-id": 1291, "amount": -686}, {"transaction-id": 1428, "amount": -836}, {"transaction-id": 1597, "amount": -860}, {"transaction-id": 1638, "amount": -1167}, {"transaction-id": 1968, "amount": -747}, {"transaction-id": 2686, "amount": -1166}, {"transaction-id": 3429, "amount": -1197}, {"transaction-id": 3769, "amount": -832}, {"transaction-id": 3874, "amount": -426}, {"transaction-id": 3932, "amount": -979}, {"transaction-id": 3976, "amount": -973}, {"transaction-id": 4119, "amount": -937}, {"transaction-id": 4162, "amount": -966}, {"transaction-id": 4468, "amount": -630}, {"transaction-id": 4594, "amount": -956}, {"transaction-id": 4651, "amount": -717}, {"transaction-id": 4675, "amount": -1010}, {"transaction-id": 4965, "amount": -967}, {"transaction-id": 5035, "amount": -568}, {"transaction-id": 5121, "amount": -854}, {"transaction-id": 5339, "amount": -1087}, {"transaction-id": 5363, "amount": -728}, {"transaction-id": 6125, "amount": -994}, {"transaction-id": 6191, "amount": -944}, {"transaction-id": 6337, "amount": -952}, {"transaction-id": 7157, "amount": -1262}, {"transaction-id": 7470, "amount": -804}, {"transaction-id": 8555, "amount": -651}, {"transaction-id": 8924, "amount": -688}, {"transaction-id": 9063, "amount": -896}, {"transaction-id": 9683, "amount": -911}, {"transaction-id": 9792, "amount": -761}, {"transaction-id": 9975, "amount": -969}]}\n',
 '{"id": 1, "name": "Bob", "transactions": [{"transaction-id": 131, "amount": 2}, {"transaction-id": 142, "amount": 2}, {"transaction-id": 431, "amount": 2}, {"transaction-id": 1139, "amount": 2}, {"transaction-id": 1565, "amount": 2}, {"transaction-id": 1591, "amount": 2}, {"transaction-id": 2170, "amount": 2}, {"transaction-id": 2179, "amount": 2}, {"transaction-id": 2455, "amount": 2}, {"transaction-id": 2560, "amount": 2}, {"transaction-id": 3310, "amount": 2}, {"transaction-id": 3503, "amount": 2}, {"transaction-id": 3642, "amount": 2}, {"transaction-id": 3987, "amount": 2}, {"transaction-id": 4191, "amount": 2}, {"transaction-id": 4298, "amount": 2}, {"transaction-id": 4392, "amount": 2}, {"transaction-id": 4775, "amount": 2}, {"transaction-id": 5795, "amount": 2}, {"transaction-id": 6184, "amount": 2}, {"transaction-id": 7211, "amount": 2}, {"transaction-id": 7873, "amount": 2}, {"transaction-id": 8158, "amount": 2}, {"transaction-id": 8681, "amount": 2}, {"transaction-id": 8865, "amount": 2}, {"transaction-id": 9562, "amount": 2}, {"transaction-id": 9932, "amount": 2}]}\n',
 '{"id": 2, "name": "George", "transactions": [{"transaction-id": 86, "amount": 2593}, {"transaction-id": 987, "amount": 2466}, {"transaction-id": 1503, "amount": 2478}, {"transaction-id": 1520, "amount": 2512}, {"transaction-id": 3065, "amount": 2631}, {"transaction-id": 3223, "amount": 2506}, {"transaction-id": 4608, "amount": 2422}, {"transaction-id": 6603, "amount": 2727}, {"transaction-id": 6768, "amount": 2364}, {"transaction-id": 6903, "amount": 2361}, {"transaction-id": 6971, "amount": 2701}, {"transaction-id": 7136, "amount": 2529}, {"transaction-id": 8353, "amount": 2616}, {"transaction-id": 9667, "amount": 2804}]}\n')

Our data comes out of the file as lines of text. Notice that file decompression happened automatically. We can make this data look more reasonable by mapping the json.loads function onto our bag.

[10]:
import json
js = lines.map(json.loads)
# take: inspect first few elements
js.take(3)
[10]:
({'id': 0,
  'name': 'Jerry',
  'transactions': [{'transaction-id': 304, 'amount': -1254},
   {'transaction-id': 832, 'amount': -1459},
   {'transaction-id': 1291, 'amount': -686},
   {'transaction-id': 1428, 'amount': -836},
   {'transaction-id': 1597, 'amount': -860},
   {'transaction-id': 1638, 'amount': -1167},
   {'transaction-id': 1968, 'amount': -747},
   {'transaction-id': 2686, 'amount': -1166},
   {'transaction-id': 3429, 'amount': -1197},
   {'transaction-id': 3769, 'amount': -832},
   {'transaction-id': 3874, 'amount': -426},
   {'transaction-id': 3932, 'amount': -979},
   {'transaction-id': 3976, 'amount': -973},
   {'transaction-id': 4119, 'amount': -937},
   {'transaction-id': 4162, 'amount': -966},
   {'transaction-id': 4468, 'amount': -630},
   {'transaction-id': 4594, 'amount': -956},
   {'transaction-id': 4651, 'amount': -717},
   {'transaction-id': 4675, 'amount': -1010},
   {'transaction-id': 4965, 'amount': -967},
   {'transaction-id': 5035, 'amount': -568},
   {'transaction-id': 5121, 'amount': -854},
   {'transaction-id': 5339, 'amount': -1087},
   {'transaction-id': 5363, 'amount': -728},
   {'transaction-id': 6125, 'amount': -994},
   {'transaction-id': 6191, 'amount': -944},
   {'transaction-id': 6337, 'amount': -952},
   {'transaction-id': 7157, 'amount': -1262},
   {'transaction-id': 7470, 'amount': -804},
   {'transaction-id': 8555, 'amount': -651},
   {'transaction-id': 8924, 'amount': -688},
   {'transaction-id': 9063, 'amount': -896},
   {'transaction-id': 9683, 'amount': -911},
   {'transaction-id': 9792, 'amount': -761},
   {'transaction-id': 9975, 'amount': -969}]},
 {'id': 1,
  'name': 'Bob',
  'transactions': [{'transaction-id': 131, 'amount': 2},
   {'transaction-id': 142, 'amount': 2},
   {'transaction-id': 431, 'amount': 2},
   {'transaction-id': 1139, 'amount': 2},
   {'transaction-id': 1565, 'amount': 2},
   {'transaction-id': 1591, 'amount': 2},
   {'transaction-id': 2170, 'amount': 2},
   {'transaction-id': 2179, 'amount': 2},
   {'transaction-id': 2455, 'amount': 2},
   {'transaction-id': 2560, 'amount': 2},
   {'transaction-id': 3310, 'amount': 2},
   {'transaction-id': 3503, 'amount': 2},
   {'transaction-id': 3642, 'amount': 2},
   {'transaction-id': 3987, 'amount': 2},
   {'transaction-id': 4191, 'amount': 2},
   {'transaction-id': 4298, 'amount': 2},
   {'transaction-id': 4392, 'amount': 2},
   {'transaction-id': 4775, 'amount': 2},
   {'transaction-id': 5795, 'amount': 2},
   {'transaction-id': 6184, 'amount': 2},
   {'transaction-id': 7211, 'amount': 2},
   {'transaction-id': 7873, 'amount': 2},
   {'transaction-id': 8158, 'amount': 2},
   {'transaction-id': 8681, 'amount': 2},
   {'transaction-id': 8865, 'amount': 2},
   {'transaction-id': 9562, 'amount': 2},
   {'transaction-id': 9932, 'amount': 2}]},
 {'id': 2,
  'name': 'George',
  'transactions': [{'transaction-id': 86, 'amount': 2593},
   {'transaction-id': 987, 'amount': 2466},
   {'transaction-id': 1503, 'amount': 2478},
   {'transaction-id': 1520, 'amount': 2512},
   {'transaction-id': 3065, 'amount': 2631},
   {'transaction-id': 3223, 'amount': 2506},
   {'transaction-id': 4608, 'amount': 2422},
   {'transaction-id': 6603, 'amount': 2727},
   {'transaction-id': 6768, 'amount': 2364},
   {'transaction-id': 6903, 'amount': 2361},
   {'transaction-id': 6971, 'amount': 2701},
   {'transaction-id': 7136, 'amount': 2529},
   {'transaction-id': 8353, 'amount': 2616},
   {'transaction-id': 9667, 'amount': 2804}]})

Basic Queries

Once we parse our JSON data into proper Python objects (dicts, lists, etc.), we can perform more interesting queries by creating small Python functions to run on our data.

[11]:
# filter: keep only some elements of the sequence
js.filter(lambda record: record['name'] == 'Alice').take(5)
[11]:
({'id': 29,
  'name': 'Alice',
  'transactions': [{'transaction-id': 29, 'amount': 212},
   {'transaction-id': 58, 'amount': 203},
   {'transaction-id': 311, 'amount': 235},
   {'transaction-id': 459, 'amount': 224},
   {'transaction-id': 751, 'amount': 187},
   {'transaction-id': 875, 'amount': 219},
   {'transaction-id': 1210, 'amount': 198},
   {'transaction-id': 1264, 'amount': 264},
   {'transaction-id': 1356, 'amount': 258},
   {'transaction-id': 1382, 'amount': 191},
   {'transaction-id': 1579, 'amount': 190},
   {'transaction-id': 1756, 'amount': 244},
   {'transaction-id': 2240, 'amount': 260},
   {'transaction-id': 2289, 'amount': 226},
   {'transaction-id': 2374, 'amount': 261},
   {'transaction-id': 2550, 'amount': 189},
   {'transaction-id': 2726, 'amount': 277},
   {'transaction-id': 2792, 'amount': 218},
   {'transaction-id': 3613, 'amount': 255},
   {'transaction-id': 4252, 'amount': 215},
   {'transaction-id': 4874, 'amount': 268},
   {'transaction-id': 5083, 'amount': 190},
   {'transaction-id': 5962, 'amount': 241},
   {'transaction-id': 6059, 'amount': 244},
   {'transaction-id': 6290, 'amount': 255},
   {'transaction-id': 6307, 'amount': 239},
   {'transaction-id': 6632, 'amount': 199},
   {'transaction-id': 6919, 'amount': 231},
   {'transaction-id': 7274, 'amount': 192},
   {'transaction-id': 7412, 'amount': 239},
   {'transaction-id': 7700, 'amount': 237},
   {'transaction-id': 7786, 'amount': 207},
   {'transaction-id': 8165, 'amount': 254},
   {'transaction-id': 8413, 'amount': 238},
   {'transaction-id': 8567, 'amount': 182},
   {'transaction-id': 9018, 'amount': 252},
   {'transaction-id': 9097, 'amount': 198},
   {'transaction-id': 9440, 'amount': 234},
   {'transaction-id': 9502, 'amount': 194},
   {'transaction-id': 9610, 'amount': 190},
   {'transaction-id': 9635, 'amount': 268},
   {'transaction-id': 9968, 'amount': 210}]},
 {'id': 53,
  'name': 'Alice',
  'transactions': [{'transaction-id': 1178, 'amount': 169},
   {'transaction-id': 3808, 'amount': 48},
   {'transaction-id': 4280, 'amount': 310},
   {'transaction-id': 6527, 'amount': -10}]},
 {'id': 55,
  'name': 'Alice',
  'transactions': [{'transaction-id': 108, 'amount': -501},
   {'transaction-id': 113, 'amount': -725},
   {'transaction-id': 118, 'amount': -442},
   {'transaction-id': 130, 'amount': -444},
   {'transaction-id': 158, 'amount': -520},
   {'transaction-id': 170, 'amount': -409},
   {'transaction-id': 365, 'amount': -408},
   {'transaction-id': 444, 'amount': -676},
   {'transaction-id': 492, 'amount': -683},
   {'transaction-id': 605, 'amount': -296},
   {'transaction-id': 712, 'amount': -292},
   {'transaction-id': 936, 'amount': -601},
   {'transaction-id': 1029, 'amount': -346},
   {'transaction-id': 1336, 'amount': -715},
   {'transaction-id': 1625, 'amount': -444},
   {'transaction-id': 1718, 'amount': -572},
   {'transaction-id': 1878, 'amount': -595},
   {'transaction-id': 1960, 'amount': -342},
   {'transaction-id': 1983, 'amount': -488},
   {'transaction-id': 2022, 'amount': -665},
   {'transaction-id': 2073, 'amount': -558},
   {'transaction-id': 2088, 'amount': -529},
   {'transaction-id': 2178, 'amount': -492},
   {'transaction-id': 2190, 'amount': -632},
   {'transaction-id': 2453, 'amount': -589},
   {'transaction-id': 2524, 'amount': -545},
   {'transaction-id': 2696, 'amount': -706},
   {'transaction-id': 2761, 'amount': -518},
   {'transaction-id': 2861, 'amount': -237},
   {'transaction-id': 2991, 'amount': -554},
   {'transaction-id': 3225, 'amount': -339},
   {'transaction-id': 3284, 'amount': -502},
   {'transaction-id': 3357, 'amount': -456},
   {'transaction-id': 3417, 'amount': -454},
   {'transaction-id': 3467, 'amount': -542},
   {'transaction-id': 3473, 'amount': -480},
   {'transaction-id': 3557, 'amount': -590},
   {'transaction-id': 3563, 'amount': -444},
   {'transaction-id': 3746, 'amount': -709},
   {'transaction-id': 3855, 'amount': -720},
   {'transaction-id': 3947, 'amount': -719},
   {'transaction-id': 3957, 'amount': -763},
   {'transaction-id': 4015, 'amount': -336},
   {'transaction-id': 4054, 'amount': -386},
   {'transaction-id': 4138, 'amount': -519},
   {'transaction-id': 4202, 'amount': -686},
   {'transaction-id': 4240, 'amount': -622},
   {'transaction-id': 4386, 'amount': -355},
   {'transaction-id': 4405, 'amount': -674},
   {'transaction-id': 4449, 'amount': -454},
   {'transaction-id': 4531, 'amount': -380},
   {'transaction-id': 4557, 'amount': -758},
   {'transaction-id': 4568, 'amount': -592},
   {'transaction-id': 4584, 'amount': -362},
   {'transaction-id': 4822, 'amount': -265},
   {'transaction-id': 4869, 'amount': -587},
   {'transaction-id': 5292, 'amount': -570},
   {'transaction-id': 5374, 'amount': -266},
   {'transaction-id': 5441, 'amount': -438},
   {'transaction-id': 5471, 'amount': -362},
   {'transaction-id': 5603, 'amount': -587},
   {'transaction-id': 5744, 'amount': -608},
   {'transaction-id': 5770, 'amount': -351},
   {'transaction-id': 5778, 'amount': -646},
   {'transaction-id': 5861, 'amount': -362},
   {'transaction-id': 6000, 'amount': -512},
   {'transaction-id': 6091, 'amount': -558},
   {'transaction-id': 6117, 'amount': -379},
   {'transaction-id': 6185, 'amount': -368},
   {'transaction-id': 6258, 'amount': -514},
   {'transaction-id': 6306, 'amount': -516},
   {'transaction-id': 6335, 'amount': -507},
   {'transaction-id': 6346, 'amount': -617},
   {'transaction-id': 6498, 'amount': -578},
   {'transaction-id': 6726, 'amount': -860},
   {'transaction-id': 6854, 'amount': -723},
   {'transaction-id': 6922, 'amount': -481},
   {'transaction-id': 6925, 'amount': -634},
   {'transaction-id': 6934, 'amount': -401},
   {'transaction-id': 6976, 'amount': -470},
   {'transaction-id': 6984, 'amount': -470},
   {'transaction-id': 7020, 'amount': -225},
   {'transaction-id': 7071, 'amount': -962},
   {'transaction-id': 7181, 'amount': -722},
   {'transaction-id': 7215, 'amount': -689},
   {'transaction-id': 7320, 'amount': -683},
   {'transaction-id': 7366, 'amount': -337},
   {'transaction-id': 7577, 'amount': -437},
   {'transaction-id': 7583, 'amount': -496},
   {'transaction-id': 7585, 'amount': -580},
   {'transaction-id': 7606, 'amount': -452},
   {'transaction-id': 7653, 'amount': -700},
   {'transaction-id': 7746, 'amount': -497},
   {'transaction-id': 7761, 'amount': -452},
   {'transaction-id': 7880, 'amount': -499},
   {'transaction-id': 8094, 'amount': -891},
   {'transaction-id': 8183, 'amount': -332},
   {'transaction-id': 8223, 'amount': -406},
   {'transaction-id': 8225, 'amount': -344},
   {'transaction-id': 8226, 'amount': -414},
   {'transaction-id': 8327, 'amount': -414},
   {'transaction-id': 8496, 'amount': -601},
   {'transaction-id': 8499, 'amount': -729},
   {'transaction-id': 8516, 'amount': -567},
   {'transaction-id': 8689, 'amount': -412},
   {'transaction-id': 8879, 'amount': -625},
   {'transaction-id': 8974, 'amount': -802},
   {'transaction-id': 9005, 'amount': -492},
   {'transaction-id': 9177, 'amount': -541},
   {'transaction-id': 9182, 'amount': -614},
   {'transaction-id': 9269, 'amount': -477},
   {'transaction-id': 9273, 'amount': -780},
   {'transaction-id': 9399, 'amount': -657},
   {'transaction-id': 9488, 'amount': -455},
   {'transaction-id': 9551, 'amount': -34},
   {'transaction-id': 9604, 'amount': -609},
   {'transaction-id': 9645, 'amount': -356},
   {'transaction-id': 9753, 'amount': -516},
   {'transaction-id': 9947, 'amount': -610},
   {'transaction-id': 9961, 'amount': -743}]},
 {'id': 63,
  'name': 'Alice',
  'transactions': [{'transaction-id': 616, 'amount': 976},
   {'transaction-id': 887, 'amount': 901},
   {'transaction-id': 1049, 'amount': 1004},
   {'transaction-id': 1462, 'amount': 939},
   {'transaction-id': 1616, 'amount': 948},
   {'transaction-id': 2499, 'amount': 1466},
   {'transaction-id': 3183, 'amount': 912},
   {'transaction-id': 4247, 'amount': 1193},
   {'transaction-id': 4595, 'amount': 1038},
   {'transaction-id': 5047, 'amount': 935},
   {'transaction-id': 7529, 'amount': 1134},
   {'transaction-id': 7829, 'amount': 979},
   {'transaction-id': 8063, 'amount': 766},
   {'transaction-id': 8515, 'amount': 932},
   {'transaction-id': 9571, 'amount': 1112},
   {'transaction-id': 9887, 'amount': 1087}]},
 {'id': 64,
  'name': 'Alice',
  'transactions': [{'transaction-id': 2766, 'amount': 604},
   {'transaction-id': 7903, 'amount': 541}]})
[12]:
def count_transactions(d):
    return {'name': d['name'], 'count': len(d['transactions'])}

# map: apply a function to each element
(js.filter(lambda record: record['name'] == 'Alice')
   .map(count_transactions)
   .take(5))
[12]:
({'name': 'Alice', 'count': 42},
 {'name': 'Alice', 'count': 4},
 {'name': 'Alice', 'count': 120},
 {'name': 'Alice', 'count': 16},
 {'name': 'Alice', 'count': 2})
[13]:
# pluck: select a field, as from a dictionary, element[field]
(js.filter(lambda record: record['name'] == 'Alice')
   .map(count_transactions)
   .pluck('count')
   .take(5))
[13]:
(42, 4, 120, 16, 2)
[14]:
# Average number of transactions for all of the Alice entries
(js.filter(lambda record: record['name'] == 'Alice')
   .map(count_transactions)
   .pluck('count')
   .mean()
   .compute())
[14]:
35.44331395348837

Use flatten to de-nest

In the example below we see the use of .flatten() to flatten results. We compute the average amount for all transactions for all Alices.

[15]:
js.filter(lambda record: record['name'] == 'Alice').pluck('transactions').take(3)
[15]:
([{'transaction-id': 29, 'amount': 212},
  {'transaction-id': 58, 'amount': 203},
  {'transaction-id': 311, 'amount': 235},
  {'transaction-id': 459, 'amount': 224},
  {'transaction-id': 751, 'amount': 187},
  {'transaction-id': 875, 'amount': 219},
  {'transaction-id': 1210, 'amount': 198},
  {'transaction-id': 1264, 'amount': 264},
  {'transaction-id': 1356, 'amount': 258},
  {'transaction-id': 1382, 'amount': 191},
  {'transaction-id': 1579, 'amount': 190},
  {'transaction-id': 1756, 'amount': 244},
  {'transaction-id': 2240, 'amount': 260},
  {'transaction-id': 2289, 'amount': 226},
  {'transaction-id': 2374, 'amount': 261},
  {'transaction-id': 2550, 'amount': 189},
  {'transaction-id': 2726, 'amount': 277},
  {'transaction-id': 2792, 'amount': 218},
  {'transaction-id': 3613, 'amount': 255},
  {'transaction-id': 4252, 'amount': 215},
  {'transaction-id': 4874, 'amount': 268},
  {'transaction-id': 5083, 'amount': 190},
  {'transaction-id': 5962, 'amount': 241},
  {'transaction-id': 6059, 'amount': 244},
  {'transaction-id': 6290, 'amount': 255},
  {'transaction-id': 6307, 'amount': 239},
  {'transaction-id': 6632, 'amount': 199},
  {'transaction-id': 6919, 'amount': 231},
  {'transaction-id': 7274, 'amount': 192},
  {'transaction-id': 7412, 'amount': 239},
  {'transaction-id': 7700, 'amount': 237},
  {'transaction-id': 7786, 'amount': 207},
  {'transaction-id': 8165, 'amount': 254},
  {'transaction-id': 8413, 'amount': 238},
  {'transaction-id': 8567, 'amount': 182},
  {'transaction-id': 9018, 'amount': 252},
  {'transaction-id': 9097, 'amount': 198},
  {'transaction-id': 9440, 'amount': 234},
  {'transaction-id': 9502, 'amount': 194},
  {'transaction-id': 9610, 'amount': 190},
  {'transaction-id': 9635, 'amount': 268},
  {'transaction-id': 9968, 'amount': 210}],
 [{'transaction-id': 1178, 'amount': 169},
  {'transaction-id': 3808, 'amount': 48},
  {'transaction-id': 4280, 'amount': 310},
  {'transaction-id': 6527, 'amount': -10}],
 [{'transaction-id': 108, 'amount': -501},
  {'transaction-id': 113, 'amount': -725},
  {'transaction-id': 118, 'amount': -442},
  {'transaction-id': 130, 'amount': -444},
  {'transaction-id': 158, 'amount': -520},
  {'transaction-id': 170, 'amount': -409},
  {'transaction-id': 365, 'amount': -408},
  {'transaction-id': 444, 'amount': -676},
  {'transaction-id': 492, 'amount': -683},
  {'transaction-id': 605, 'amount': -296},
  {'transaction-id': 712, 'amount': -292},
  {'transaction-id': 936, 'amount': -601},
  {'transaction-id': 1029, 'amount': -346},
  {'transaction-id': 1336, 'amount': -715},
  {'transaction-id': 1625, 'amount': -444},
  {'transaction-id': 1718, 'amount': -572},
  {'transaction-id': 1878, 'amount': -595},
  {'transaction-id': 1960, 'amount': -342},
  {'transaction-id': 1983, 'amount': -488},
  {'transaction-id': 2022, 'amount': -665},
  {'transaction-id': 2073, 'amount': -558},
  {'transaction-id': 2088, 'amount': -529},
  {'transaction-id': 2178, 'amount': -492},
  {'transaction-id': 2190, 'amount': -632},
  {'transaction-id': 2453, 'amount': -589},
  {'transaction-id': 2524, 'amount': -545},
  {'transaction-id': 2696, 'amount': -706},
  {'transaction-id': 2761, 'amount': -518},
  {'transaction-id': 2861, 'amount': -237},
  {'transaction-id': 2991, 'amount': -554},
  {'transaction-id': 3225, 'amount': -339},
  {'transaction-id': 3284, 'amount': -502},
  {'transaction-id': 3357, 'amount': -456},
  {'transaction-id': 3417, 'amount': -454},
  {'transaction-id': 3467, 'amount': -542},
  {'transaction-id': 3473, 'amount': -480},
  {'transaction-id': 3557, 'amount': -590},
  {'transaction-id': 3563, 'amount': -444},
  {'transaction-id': 3746, 'amount': -709},
  {'transaction-id': 3855, 'amount': -720},
  {'transaction-id': 3947, 'amount': -719},
  {'transaction-id': 3957, 'amount': -763},
  {'transaction-id': 4015, 'amount': -336},
  {'transaction-id': 4054, 'amount': -386},
  {'transaction-id': 4138, 'amount': -519},
  {'transaction-id': 4202, 'amount': -686},
  {'transaction-id': 4240, 'amount': -622},
  {'transaction-id': 4386, 'amount': -355},
  {'transaction-id': 4405, 'amount': -674},
  {'transaction-id': 4449, 'amount': -454},
  {'transaction-id': 4531, 'amount': -380},
  {'transaction-id': 4557, 'amount': -758},
  {'transaction-id': 4568, 'amount': -592},
  {'transaction-id': 4584, 'amount': -362},
  {'transaction-id': 4822, 'amount': -265},
  {'transaction-id': 4869, 'amount': -587},
  {'transaction-id': 5292, 'amount': -570},
  {'transaction-id': 5374, 'amount': -266},
  {'transaction-id': 5441, 'amount': -438},
  {'transaction-id': 5471, 'amount': -362},
  {'transaction-id': 5603, 'amount': -587},
  {'transaction-id': 5744, 'amount': -608},
  {'transaction-id': 5770, 'amount': -351},
  {'transaction-id': 5778, 'amount': -646},
  {'transaction-id': 5861, 'amount': -362},
  {'transaction-id': 6000, 'amount': -512},
  {'transaction-id': 6091, 'amount': -558},
  {'transaction-id': 6117, 'amount': -379},
  {'transaction-id': 6185, 'amount': -368},
  {'transaction-id': 6258, 'amount': -514},
  {'transaction-id': 6306, 'amount': -516},
  {'transaction-id': 6335, 'amount': -507},
  {'transaction-id': 6346, 'amount': -617},
  {'transaction-id': 6498, 'amount': -578},
  {'transaction-id': 6726, 'amount': -860},
  {'transaction-id': 6854, 'amount': -723},
  {'transaction-id': 6922, 'amount': -481},
  {'transaction-id': 6925, 'amount': -634},
  {'transaction-id': 6934, 'amount': -401},
  {'transaction-id': 6976, 'amount': -470},
  {'transaction-id': 6984, 'amount': -470},
  {'transaction-id': 7020, 'amount': -225},
  {'transaction-id': 7071, 'amount': -962},
  {'transaction-id': 7181, 'amount': -722},
  {'transaction-id': 7215, 'amount': -689},
  {'transaction-id': 7320, 'amount': -683},
  {'transaction-id': 7366, 'amount': -337},
  {'transaction-id': 7577, 'amount': -437},
  {'transaction-id': 7583, 'amount': -496},
  {'transaction-id': 7585, 'amount': -580},
  {'transaction-id': 7606, 'amount': -452},
  {'transaction-id': 7653, 'amount': -700},
  {'transaction-id': 7746, 'amount': -497},
  {'transaction-id': 7761, 'amount': -452},
  {'transaction-id': 7880, 'amount': -499},
  {'transaction-id': 8094, 'amount': -891},
  {'transaction-id': 8183, 'amount': -332},
  {'transaction-id': 8223, 'amount': -406},
  {'transaction-id': 8225, 'amount': -344},
  {'transaction-id': 8226, 'amount': -414},
  {'transaction-id': 8327, 'amount': -414},
  {'transaction-id': 8496, 'amount': -601},
  {'transaction-id': 8499, 'amount': -729},
  {'transaction-id': 8516, 'amount': -567},
  {'transaction-id': 8689, 'amount': -412},
  {'transaction-id': 8879, 'amount': -625},
  {'transaction-id': 8974, 'amount': -802},
  {'transaction-id': 9005, 'amount': -492},
  {'transaction-id': 9177, 'amount': -541},
  {'transaction-id': 9182, 'amount': -614},
  {'transaction-id': 9269, 'amount': -477},
  {'transaction-id': 9273, 'amount': -780},
  {'transaction-id': 9399, 'amount': -657},
  {'transaction-id': 9488, 'amount': -455},
  {'transaction-id': 9551, 'amount': -34},
  {'transaction-id': 9604, 'amount': -609},
  {'transaction-id': 9645, 'amount': -356},
  {'transaction-id': 9753, 'amount': -516},
  {'transaction-id': 9947, 'amount': -610},
  {'transaction-id': 9961, 'amount': -743}])
[16]:
(js.filter(lambda record: record['name'] == 'Alice')
   .pluck('transactions')
   .flatten()
   .take(3))
[16]:
({'transaction-id': 29, 'amount': 212},
 {'transaction-id': 58, 'amount': 203},
 {'transaction-id': 311, 'amount': 235})
[17]:
(js.filter(lambda record: record['name'] == 'Alice')
   .pluck('transactions')
   .flatten()
   .pluck('amount')
   .take(3))
[17]:
(212, 203, 235)
[18]:
(js.filter(lambda record: record['name'] == 'Alice')
   .pluck('transactions')
   .flatten()
   .pluck('amount')
   .mean()
   .compute())
[18]:
212.9851137994669

Groupby and Foldby

Often we want to group data by some function or key. We can do this either with the .groupby method, which is straightforward but forces a full shuffle of the data (expensive), or with the harder-to-use but faster .foldby method, which does a streaming, combined groupby and reduction.

  • groupby: Shuffles data so that all items with the same key are in the same key-value pair

  • foldby: Walks through the data accumulating a result per key

Note: the full groupby is particularly expensive. In actual workloads you would do well to use foldby or to switch to DataFrames if possible.

groupby

Groupby gathers the items in your collection so that all items with the same value under some function end up together in one key-value pair.

[19]:
b = db.from_sequence(['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank'])
b.groupby(len).compute()  # names grouped by length
[19]:
[(7, ['Charlie']), (3, ['Bob', 'Dan']), (5, ['Alice', 'Edith', 'Frank'])]
[20]:
b = db.from_sequence(list(range(10)))
b.groupby(lambda x: x % 2).compute()
[20]:
[(0, [0, 2, 4, 6, 8]), (1, [1, 3, 5, 7, 9])]
[21]:
b.groupby(lambda x: x % 2).starmap(lambda k, v: (k, max(v))).compute()
[21]:
[(0, 8), (1, 9)]

foldby

Foldby can seem quite odd at first. It is similar to functions in other libraries, such as toolz.reduceby and pyspark.RDD.combineByKey.

When using foldby you provide:

  1. A key function on which to group elements

  2. A binary operator, such as you would pass to reduce, that is used to perform the reduction within each group

  3. A combine binary operator that can combine the results of two reduce calls on different parts of your dataset.

Your reduction must be associative. It will happen in parallel in each of the partitions of your dataset. Then all of these intermediate results will be combined by the combine binary operator.
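
To make this concrete, here is a pure-Python sketch of the scheme using toolz (illustrative only; it mirrors the idea, not Dask's actual implementation):

from functools import reduce
import toolz

partitions = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]  # two blocks of data
key = lambda x: x % 2                            # group by parity

# step 1: reduce within each partition using the binary operator (here max)
partials = [toolz.reduceby(key, max, part) for part in partitions]
# -> [{0: 4, 1: 3}, {1: 9, 0: 8}]

# step 2: merge the partial results across partitions with the combine operator
combined = toolz.merge_with(lambda vals: reduce(max, vals), *partials)
print(combined)  # {0: 8, 1: 9}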

[22]:
# group by parity: the key returns 0 for even numbers and 1 for odd numbers
parity = lambda x: x % 2
b.foldby(parity, binop=max, combine=max).compute()
[22]:
[(0, 8), (1, 9)]

Example with account data

We find the number of people with the same name.

[23]:
%%time
# Warning, this one takes a while...
result = js.groupby(lambda item: item['name']).starmap(lambda k, v: (k, len(v))).compute()
print(sorted(result))
[('Alice', 152), ('Alice', 166), ('Alice', 178), ('Alice', 192), ('Bob', 143), ('Bob', 156), ('Bob', 169), ('Bob', 182), ('Charlie', 113), ('Charlie', 122), ('Charlie', 131), ('Charlie', 143), ('Dan', 85), ('Dan', 93), ('Dan', 100), ('Dan', 109), ('Edith', 88), ('Edith', 96), ('Edith', 104), ('Edith', 110), ('Frank', 55), ('Frank', 60), ('Frank', 65), ('Frank', 70), ('George', 120), ('George', 132), ('George', 143), ('George', 154), ('Hannah', 117), ('Hannah', 127), ('Hannah', 140), ('Hannah', 151), ('Ingrid', 87), ('Ingrid', 94), ('Ingrid', 99), ('Ingrid', 108), ('Jerry', 141), ('Jerry', 154), ('Jerry', 166), ('Jerry', 181), ('Kevin', 87), ('Kevin', 93), ('Kevin', 103), ('Kevin', 112), ('Laura', 77), ('Laura', 84), ('Laura', 91), ('Laura', 98), ('Michael', 100), ('Michael', 112), ('Michael', 120), ('Michael', 132), ('Norbert', 99), ('Norbert', 108), ('Norbert', 117), ('Norbert', 126), ('Oliver', 130), ('Oliver', 138), ('Oliver', 155), ('Oliver', 165), ('Patricia', 141), ('Patricia', 151), ('Patricia', 167), ('Patricia', 179), ('Quinn', 55), ('Quinn', 60), ('Quinn', 65), ('Quinn', 70), ('Ray', 66), ('Ray', 72), ('Ray', 78), ('Ray', 84), ('Sarah', 110), ('Sarah', 120), ('Sarah', 130), ('Sarah', 140), ('Tim', 117), ('Tim', 126), ('Tim', 136), ('Tim', 145), ('Ursula', 46), ('Ursula', 54), ('Ursula', 55), ('Ursula', 62), ('Victor', 170), ('Victor', 182), ('Victor', 197), ('Victor', 216), ('Wendy', 98), ('Wendy', 126), ('Wendy', 224), ('Xavier', 77), ('Xavier', 84), ('Xavier', 91), ('Xavier', 98), ('Yvonne', 165), ('Yvonne', 180), ('Yvonne', 195), ('Yvonne', 210), ('Zelda', 55), ('Zelda', 60), ('Zelda', 65), ('Zelda', 70)]
CPU times: user 3.89 s, sys: 270 ms, total: 4.16 s
Wall time: 59.2 s
[24]:
%%time
# This one is comparatively fast and produces the same result.
from operator import add
def incr(tot, _):
    return tot+1

result = js.foldby(key='name',
                   binop=incr,
                   initial=0,
                   combine=add,
                   combine_initial=0).compute()
print(sorted(result))
[('Alice', 688), ('Bob', 650), ('Charlie', 509), ('Dan', 387), ('Edith', 398), ('Frank', 250), ('George', 549), ('Hannah', 535), ('Ingrid', 388), ('Jerry', 642), ('Kevin', 395), ('Laura', 350), ('Michael', 464), ('Norbert', 450), ('Oliver', 588), ('Patricia', 638), ('Quinn', 250), ('Ray', 300), ('Sarah', 500), ('Tim', 524), ('Ursula', 217), ('Victor', 765), ('Wendy', 448), ('Xavier', 350), ('Yvonne', 750), ('Zelda', 250)]
CPU times: user 179 ms, sys: 3.04 ms, total: 182 ms
Wall time: 631 ms

Exercise: compute total amount per name

We want to groupby (or foldby) the name key, then add up all of the amounts for each name.

Steps

  1. Create a small function that, given a dictionary like

    {'name': 'Alice', 'transactions': [{'amount': 1, 'id': 123}, {'amount': 2, 'id': 456}]}
    

    produces the sum of the amounts, e.g. 3

  2. Slightly change the binary operator of the foldby example above so that the binary operator doesn’t count the number of entries, but instead accumulates the sum of the amounts.

[25]:
# Your code here...
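
One possible solution, following the foldby pattern above (a sketch; other formulations work too):

from operator import add

def sum_amounts(d):
    # total transaction amount for a single record
    return sum(t['amount'] for t in d['transactions'])

def add_amounts(tot, d):
    # binary operator: fold one record's total into the running sum
    return tot + sum_amounts(d)

result = js.foldby(key='name',
                   binop=add_amounts,
                   initial=0,
                   combine=add,
                   combine_initial=0).compute()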

DataFrames

For the same reasons that Pandas is often faster than pure Python, dask.dataframe can be faster than dask.bag. We will work more with DataFrames later, but from the Bag point of view, they are frequently the end-point of the “messy” part of data ingestion: once the data can be made into a DataFrame, complex split-apply-combine logic becomes much more straightforward and efficient.

You can transform a bag with a simple tuple or flat dictionary structure into a dask.dataframe with the to_dataframe method.

[26]:
df1 = js.to_dataframe()
df1.head()
[26]:
id name transactions
0 0 Jerry [{'transaction-id': 304, 'amount': -1254}, {'t...
1 1 Bob [{'transaction-id': 131, 'amount': 2}, {'trans...
2 2 George [{'transaction-id': 86, 'amount': 2593}, {'tra...
3 3 Hannah [{'transaction-id': 843, 'amount': 1155}, {'tr...
4 4 Patricia [{'transaction-id': 557, 'amount': 207}, {'tra...

This now looks like a well-defined DataFrame, and we can apply Pandas-like computations to it efficiently.

Using a Dask DataFrame, how long does it take to do our prior computation of the number of people with the same name? It turns out that dask.dataframe.groupby() beats dask.bag.groupby() by more than an order of magnitude, but it still cannot match dask.bag.foldby() for this case.

[27]:
%time df1.groupby('name').id.count().compute().head()
CPU times: user 239 ms, sys: 5.03 ms, total: 244 ms
Wall time: 1.68 s
[27]:
name
Alice      688
Bob        650
Charlie    509
Dan        387
Edith      398
Name: id, dtype: int64

Denormalization

This DataFrame format is less than optimal because the transactions column is filled with nested data, so Pandas has to revert to object dtype, which is quite slow in Pandas. Ideally we want to transform to a DataFrame only after we have flattened our data, so that each field of each record is a single int, string, float, etc.

[28]:
def denormalize(record):
    # returns a list for every nested item, each transaction of each person
    return [{'id': record['id'],
             'name': record['name'],
             'amount': transaction['amount'],
             'transaction-id': transaction['transaction-id']}
            for transaction in record['transactions']]

transactions = js.map(denormalize).flatten()
transactions.take(3)
[28]:
({'id': 0, 'name': 'Jerry', 'amount': -1254, 'transaction-id': 304},
 {'id': 0, 'name': 'Jerry', 'amount': -1459, 'transaction-id': 832},
 {'id': 0, 'name': 'Jerry', 'amount': -686, 'transaction-id': 1291})
[29]:
df = transactions.to_dataframe()
df.head()
[29]:
id name amount transaction-id
0 0 Jerry -1254 304
1 0 Jerry -1459 832
2 0 Jerry -686 1291
3 0 Jerry -836 1428
4 0 Jerry -860 1597
[30]:
%%time
# number of transactions per name
# note that the time here includes the data load and ingestion
df.groupby('name')['transaction-id'].count().compute()
CPU times: user 231 ms, sys: 6.06 ms, total: 237 ms
Wall time: 1.5 s
[30]:
name
Alice       24385
Bob         32673
Charlie      9809
Dan         11637
Edith       16750
Frank       12508
George      25176
Hannah      16631
Ingrid      14547
Jerry       26080
Kevin       17305
Laura       30168
Michael     28565
Norbert     18835
Oliver      18438
Patricia    23113
Quinn        6396
Ray         24921
Sarah       22357
Tim         21149
Ursula       1688
Victor      34689
Wendy       17309
Xavier       8378
Yvonne      20648
Zelda       15845
Name: transaction-id, dtype: int64

Limitations

Bags provide very general computation (any Python function). This generality comes at a cost. Bags have the following known limitations:

  1. Bag operations tend to be slower than array/dataframe computations in the same way that Python tends to be slower than NumPy/Pandas

  2. Bag.groupby is slow. You should try to use Bag.foldby if possible, although it requires more thought. Even better, consider creating a normalized DataFrame.

Shutdown

[31]:
client.shutdown()