Futures - non-blocking distributed calculations
Submit arbitrary functions for computation in a parallelized, eager, and non-blocking way.
The futures interface (derived from the built-in concurrent.futures) provides fine-grained real-time execution for custom situations. We can submit individual functions for evaluation with one set of inputs, or evaluate them over a sequence of inputs, with submit() and map(). The call returns immediately, giving one or more futures whose status begins as “pending” and later becomes “finished”. There is no blocking of the local Python session.
This is the important difference between futures and delayed. Both can be used to support arbitrary task scheduling, but delayed is lazy (it just constructs a graph) whereas futures are eager. With futures, as soon as the inputs are available and there is compute available, the computation starts.
[1]:
from dask.distributed import Client
client = Client(n_workers=4)
client
[1]:
Client: Client-438c7a7e-168e-11ee-95a8-6045bd777373
Connection method: Cluster object | Cluster type: distributed.LocalCluster
Dashboard: http://127.0.0.1:8787/status
LocalCluster: Scheduler tcp://127.0.0.1:40391 | Workers: 4 | Total threads: 4 | Total memory: 6.77 GiB
A Typical Workflow
This is the same workflow that we saw in the delayed notebook. It is for-loopy and the data is not necessarily an array or a dataframe. The following example outlines a read-transform-write:
def process_file(filename):
    data = read_a_file(filename)
    data = do_a_transformation(data)
    destination = f"results/{filename}"
    write_out_data(data, destination)
    return destination

futures = []
for filename in filenames:
    future = client.submit(process_file, filename)
    futures.append(future)

futures
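Once all the futures finish, their results (here, the destination paths) can be collected in a single call. A minimal sketch, assuming the placeholder functions above exist:

destinations = client.gather(futures)  # blocks until every future has finished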
Basics

Just like we did in the delayed notebook, let’s make some toy functions, inc, double, and add, that sleep for a while to simulate work. We’ll then run these functions normally.
[2]:
from time import sleep


def inc(x):
    sleep(1)
    return x + 1


def double(x):
    sleep(2)
    return 2 * x


def add(x, y):
    sleep(1)
    return x + y
We can run these locally:
[3]:
inc(1)
[3]:
2
Or we can submit them to run remotely with Dask. This immediately returns a future that points to the ongoing computation, and eventually to the stored result.
[4]:
future = client.submit(inc, 1) # returns immediately with pending future
future
[4]:
If you wait a second, and then check on the future again, you’ll see that it has finished.
[5]:
future
[5]:
You can block on the computation and gather the result with the .result() method.
[6]:
future.result()
[6]:
2
Other ways to wait for a future

from dask.distributed import wait, progress

progress(future) shows a progress bar in this notebook, rather than having to go to the dashboard. This progress bar is also asynchronous, and doesn’t block the execution of other code in the meantime.

wait(future) blocks and forces the notebook to wait until the computation pointed to by future is done. However, note that if the result of inc() is already sitting in the cluster, it takes no time to execute the computation now, because Dask notices that we are asking for the result of a computation it already knows about. More on this later.
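A minimal sketch of both, reusing the inc function and client from above:

from dask.distributed import wait, progress

future = client.submit(inc, 10)
progress(future)  # asynchronous progress bar; other cells keep running

wait(future)      # blocks until the future has finished
future.result()   # returns immediately, since the result is already on the cluster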
client.compute

Generally, any Dask operation that is executed using .compute() or dask.compute() can be submitted for asynchronous execution using client.compute() instead.
Here is an example from the delayed notebook:
[7]:
import dask


@dask.delayed
def inc(x):
    sleep(1)
    return x + 1


@dask.delayed
def add(x, y):
    sleep(1)
    return x + y


x = inc(1)
y = inc(2)
z = add(x, y)
So far we have a regular dask.delayed output. When we pass z to client.compute we get a future back and Dask starts evaluating the task graph.
[8]:
# notice the difference from z.compute()
# notice that this cell completes immediately
future = client.compute(z)
future
[8]:
[9]:
future.result() # waits until result is ready
[9]:
5
When using futures, the computation moves to the data rather than the other way around, and the client, in the local Python session, need never see the intermediate values.
client.submit

client.submit takes a function and arguments, pushes these to the cluster, and returns a Future representing the result to be computed. The function is passed to a worker process for evaluation. This looks a lot like doing client.compute(), above, except now we are passing the function and arguments directly to the cluster.
[10]:
def inc(x):
    sleep(1)
    return x + 1


future_x = client.submit(inc, 1)
future_y = client.submit(inc, 2)
future_z = client.submit(sum, [future_x, future_y])
future_z
[10]:
[11]:
future_z.result() # waits until result is ready
[11]:
5
The arguments to client.submit can be regular Python functions and objects, futures from other submit operations, or dask.delayed objects.

Each future represents a result held, or being evaluated, by the cluster. Thus we can control caching of intermediate values: when a future is no longer referenced, its value is forgotten. In the example above, futures are held for each of the function calls. These results would not need to be re-evaluated if we chose to submit more work that needed them.
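A minimal sketch of this, reusing inc from above:

future_a = client.submit(inc, 10)
future_b = client.submit(inc, future_a)  # the future is passed directly; its value stays on the cluster
future_b.result()                        # 12

del future_a  # with no references left, the scheduler may forget the cached intermediate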
We can explicitly pass data from our local session into the cluster using client.scatter(), but usually it is better to construct functions that do the loading of data within the workers themselves, so that there is no need to serialize and communicate the data. Most of the loading functions within Dask, such as dd.read_csv, work this way. Similarly, we normally don’t want to gather() results that are too big in memory.
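That said, here is a minimal sketch of client.scatter. Note that scattering a list sends each element separately, so we wrap our object in a one-element list to get back a single future:

data = list(range(100))
[remote_data] = client.scatter([data])   # one future pointing at the whole list
total = client.submit(sum, remote_data)  # the computation runs where the data lives
total.result()                           # 4950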
Example: Sporadically failing task
Let’s imagine a task that sometimes fails. You might encounter this when dealing with input data where sometimes a file is malformed, or maybe a request times out.
[12]:
from random import random


def flaky_inc(i):
    if random() < 0.2:
        raise ValueError("You hit the error!")
    return i + 1
If you run this function over and over again, it will sometimes fail.
>>> flaky_inc(2)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [65], in <cell line: 1>()
----> 1 flaky_inc(2)

Input In [61], in flaky_inc(i)
      3 def flaky_inc(i):
      4     if random() < 0.2:
----> 5         raise ValueError("You hit the error!")
      6     return i + 1

ValueError: You hit the error!
We can run this function on a range of inputs using client.map.
[13]:
futures = client.map(flaky_inc, range(10))
Notice how the cell returned even though some of the computations failed. We can inspect these futures one by one and find the ones that failed:
[14]:
for i, future in enumerate(futures):
    print(i, future.status)
0 pending
1 error
2 pending
3 error
4 pending
5 pending
6 pending
7 pending
8 pending
9 pending
2023-06-29 15:04:44,470 - distributed.worker - WARNING - Compute Failed
Key: flaky_inc-f6da6cd306b8486204902a3870a6fdae
Function: flaky_inc
args: (3)
kwargs: {}
Exception: "ValueError('You hit the error!')"
2023-06-29 15:04:44,471 - distributed.worker - WARNING - Compute Failed
Key: flaky_inc-1cecb32b6e65e2400b20e991211b2b2f
Function: flaky_inc
args: (1)
kwargs: {}
Exception: "ValueError('You hit the error!')"
2023-06-29 15:04:44,480 - distributed.worker - WARNING - Compute Failed
Key: flaky_inc-8ddb78078a71f0888743d87b0702a4dd
Function: flaky_inc
args: (0)
kwargs: {}
Exception: "ValueError('You hit the error!')"
2023-06-29 15:04:44,486 - distributed.worker - WARNING - Compute Failed
Key: flaky_inc-075e632c974e750d32844520adb2855a
Function: flaky_inc
args: (2)
kwargs: {}
Exception: "ValueError('You hit the error!')"
You can rerun those specific futures to try to get the task to successfully complete:
[15]:
futures[5].retry()
[16]:
for i, future in enumerate(futures):
    print(i, future.status)
0 error
1 error
2 error
3 error
4 finished
5 lost
6 finished
7 finished
8 finished
9 finished
A more concise way of retrying in the case of sporadic failures is by setting the number of retries in the client.compute, client.submit, or client.map method.

Note: In this example we also need to set pure=False to let Dask know that the arguments to the function do not totally determine the output.
[17]:
futures = client.map(flaky_inc, range(10), retries=5, pure=False)
future_z = client.submit(sum, futures)
future_z.result()
2023-06-29 15:04:44,530 - distributed.worker - WARNING - Compute Failed
Key: flaky_inc-e4337c25-d1ab-4a6e-8506-4014bea0c0d8-9
Function: flaky_inc
args: (9)
kwargs: {}
Exception: "ValueError('You hit the error!')"
2023-06-29 15:04:44,531 - distributed.worker - WARNING - Compute Failed
Key: flaky_inc-e4337c25-d1ab-4a6e-8506-4014bea0c0d8-8
Function: flaky_inc
args: (8)
kwargs: {}
Exception: "ValueError('You hit the error!')"
2023-06-29 15:04:44,537 - distributed.worker - WARNING - Compute Failed
Key: flaky_inc-e4337c25-d1ab-4a6e-8506-4014bea0c0d8-9
Function: flaky_inc
args: (9)
kwargs: {}
Exception: "ValueError('You hit the error!')"
[17]:
55
You will see a lot of warnings, but the computation should eventually succeed.
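Why is pure=False needed? By default Dask assumes functions are pure and derives a future’s key deterministically from the function and its arguments, so identical calls share one cached result; with pure=False each call gets a fresh key and runs independently. A minimal sketch, reusing inc and flaky_inc from above:

a = client.submit(inc, 1)
b = client.submit(inc, 1)
assert a.key == b.key  # same key: computed once, result shared

c = client.submit(flaky_inc, 1, pure=False)
d = client.submit(flaky_inc, 1, pure=False)
assert c.key != d.key  # fresh keys: each call is evaluated independently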
Why use Futures?

The futures API offers a work submission style that can easily emulate the map/reduce paradigm. If that is familiar to you, then futures might be the simplest entry point into Dask.

The other big benefit of futures is that the intermediate results, represented by futures, can be passed to new tasks without having to pull data locally from the cluster. New operations can be set up to work on the output of previous jobs that haven’t even begun yet.
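A minimal map/reduce-style sketch, reusing inc from above:

mapped = client.map(inc, range(10))   # "map": ten futures computed in parallel
reduced = client.submit(sum, mapped)  # "reduce": runs on the cluster, no data pulled locally
reduced.result()                      # 55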