
Commit 16c13a4

AlvaroGI authored and basnijholt committed

Tutorial and docs updated to .rst

The docs and tutorial have been updated to include the AverageLearner1D. The previous tutorial (a Python notebook) has been replaced by a new tutorial as an .rst file.

1 parent 36c4b60 commit 16c13a4

8 files changed

Lines changed: 234 additions & 219 deletions

README.rst

Lines changed: 3 additions & 1 deletion
@@ -44,8 +44,10 @@ The following learners are implemented:
 - ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
 - ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
-- ``AverageLearner``, For stochastic functions where you want to
+- ``AverageLearner``, for random variables where you want to
   average the result over many evaluations,
+- ``AverageLearner1D``, for stochastic 1D functions where you want to
+  estimate the mean value of the function at each point,
 - ``IntegratorLearner``, for
   when you want to intergrate a 1D function ``f: ℝ → ℝ``,
 - ``BalancingLearner``, for when you want to run several learners at once,
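The "average the result over many evaluations" idea behind ``AverageLearner`` can be sketched in plain Python. This is an illustrative, stdlib-only sketch (``noisy`` and ``estimate_mean`` are hypothetical names, not the adaptive API):

```python
import random
import statistics

def noisy(x, sigma=1.0):
    # A deterministic signal plus Gaussian noise.
    return x ** 2 + random.gauss(0, sigma)

def estimate_mean(f, x, n_samples):
    # Average many evaluations to suppress the noise; the standard
    # error of the mean shrinks as 1 / sqrt(n_samples).
    samples = [f(x) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples) / n_samples ** 0.5

random.seed(0)
mean, stderr = estimate_mean(noisy, 2.0, n_samples=10_000)
print(mean, stderr)  # mean close to 4.0, stderr close to 0.01
```

With 10,000 samples and unit noise, the uncertainty on the mean is about 0.01, which is the shrinking error bar an averaging learner exploits.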

docs/source/docs.rst

Lines changed: 3 additions & 1 deletion
@@ -16,8 +16,10 @@ The following learners are implemented:
 - `~adaptive.Learner1D`, for 1D functions ``f: ℝ → ℝ^N``,
 - `~adaptive.Learner2D`, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - `~adaptive.LearnerND`, for ND functions ``f: ℝ^N → ℝ^M``,
-- `~adaptive.AverageLearner`, For stochastic functions where you want to
+- `~adaptive.AverageLearner`, for random variables where you want to
   average the result over many evaluations,
+- `~adaptive.AverageLearner1D`, for stochastic 1D functions where you want to
+  estimate the mean value of the function at each point,
 - `~adaptive.IntegratorLearner`, for
   when you want to intergrate a 1D function ``f: ℝ → ℝ``.

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+adaptive.AverageLearner
+=======================
+
+.. autoclass:: adaptive.AverageLearner1D
+    :members:
+    :undoc-members:
+    :show-inheritance:
docs/source/reference/adaptive.rst

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@ Learners
 .. toctree::

    adaptive.learner.average_learner
+   adaptive.learner.average_learner1D
    adaptive.learner.base_learner
    adaptive.learner.balancing_learner
    adaptive.learner.data_saver
Lines changed: 116 additions & 0 deletions
@@ -0,0 +1,116 @@
+Tutorial `~adaptive.AverageLearner1D`
+-------------------------------------
+
+.. note::
+    Because this documentation consists of static html, the ``live_plot``
+    and ``live_info`` widgets are not live. Download the notebook
+    in order to see the real behaviour.
+
+.. seealso::
+    The complete source code of this tutorial can be found in
+    :jupyter-download:notebook:`tutorial.AverageLearner1D`
+
+.. jupyter-execute::
+    :hide-code:
+
+    import adaptive
+    adaptive.notebook_extension()
+    %config InlineBackend.figure_formats=set(['svg'])
+
+    import numpy as np
+    from functools import partial
+    import random
+
+General use
+...........
+
+First, we define the (noisy) function to be sampled. Note that the parameter
+``sigma`` corresponds to the standard deviation of the Gaussian noise.
+
+.. jupyter-execute::
+
+    def f(x, sigma=0, peak_width=0.05, offset=-0.5, wait=False):
+        from time import sleep
+        from random import random
+
+        if wait:
+            sleep(random())
+
+        function = x ** 3 - x + 3 * peak_width ** 2 / (peak_width ** 2 + (x - offset) ** 2)
+        return function + np.random.normal(0, sigma)
+
+This is how the function looks in the absence of noise:
+
+.. jupyter-execute::
+
+    import matplotlib.pyplot as plt
+    x = np.linspace(-2, 2, 500)
+    plt.plot(x, f(x, sigma=0));
+
+This is how a single realization of the noisy function looks:
+
+.. jupyter-execute::
+
+    plt.plot(x, [f(xi, sigma=1) for xi in x]);
+
+To obtain an estimate of the mean value of the function at each point ``x``, we
+take many samples at ``x`` and calculate the sample mean. The learner will
+autonomously determine whether the next samples should be taken at an old
+point (to improve the estimate of the mean at that point) or at a new one.
+
+We start by initializing a 1D average learner:
+
+.. jupyter-execute::
+
+    learner = adaptive.AverageLearner1D(
+        function=partial(f, sigma=1),
+        bounds=(-2, 2))
+
+As with other types of learners, we need to initialize a runner with a certain
+goal to run our learner. In this case, we set 10000 samples as the goal (the
+second condition ensures that we have at least 20 samples at each point):
+
+.. jupyter-execute::
+
+    runner = adaptive.Runner(learner, goal=lambda l: l.total_samples >= 10000 and min(l._number_samples.values()) >= 20)
+    runner.live_info()
+    runner.live_plot(update_interval=0.1)
+
+Fine tuning
+...........
+
+In some cases, the default configuration of the 1D average learner can be
+sub-optimal. One can then tune the internal parameters of the learner. The most
+relevant are:
+
+- ``loss_per_interval``: the loss function (see ``Learner1D``).
+- ``delta``: the most relevant parameter; it controls the balance between resampling existing points (exploitation) and sampling new ones (exploration). Its value should remain between 0 and 1 (the default is 0.2). Large values favor "exploration", although this can make the learner sample noise. Small values favor "exploitation", leading the learner to thoroughly resample existing points. In general, the optimal value of ``delta`` lies between 0.1 and 0.4.
+- ``neighbor_sampling``: each new point is initially sampled a fraction ``neighbor_sampling`` of the number of samples of its nearest neighbor. We recommend keeping ``neighbor_sampling`` below 1 to prevent oversampling.
+- ``min_samples``: the minimum number of samples initially taken at a new point. This parameter can prevent the learner from sampling noise if we accidentally set too large a value of ``delta``.
+- ``max_samples``: the maximum number of samples at each point. If a point has been sampled ``max_samples`` times, it will not be sampled again. This prevents "exploitation" from drastically dominating "exploration" if we set too small a ``delta``.
+- ``min_error``: the minimum uncertainty at each point (this uncertainty corresponds to the standard deviation in the estimate of the mean). Like ``max_samples``, this parameter can prevent "exploitation" from drastically dominating "exploration".
+
+As an example, assume that we want to resample the points from the previous
+learner. We can decrease ``delta`` to 0.1 and set ``min_error`` to 0.05 if we do
+not require accuracy beyond this value:
+
+.. jupyter-execute::
+
+    learner.delta = 0.1
+    learner.min_error = 0.05
+
+    runner = adaptive.Runner(learner, goal=lambda l: l.total_samples >= 20000 and min(l._number_samples.values()) >= 20)
+    runner.live_info()
+    runner.live_plot(update_interval=0.1)
+
+On the contrary, if we want to push the "exploration" further, we can set a larger
+``delta`` and limit the maximum number of samples taken at each point:
+
+.. jupyter-execute::
+
+    learner.delta = 0.3
+    learner.max_samples = 1000
+
+    runner = adaptive.Runner(learner, goal=lambda l: l.total_samples >= 25000 and min(l._number_samples.values()) >= 20)
+    runner.live_info()
+    runner.live_plot(update_interval=0.1)
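The per-point estimate this tutorial describes (a sample mean plus the standard deviation of that mean) can be maintained in a streaming fashion. A minimal sketch using Welford's online algorithm; ``RunningMean`` is a hypothetical helper, not part of the adaptive library:

```python
import math
import random

class RunningMean:
    """Streaming sample mean and standard error (Welford's algorithm)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the mean

    def add(self, sample):
        # Update mean and squared-deviation sum in O(1) per sample.
        self.n += 1
        delta = sample - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (sample - self.mean)

    @property
    def standard_error(self):
        # Standard deviation of the mean estimate (requires n >= 2).
        return math.sqrt(self._m2 / (self.n - 1) / self.n)

random.seed(1)
rm = RunningMean()
for _ in range(5000):
    rm.add(random.gauss(3.0, 1.0))
print(rm.mean, rm.standard_error)  # roughly 3.0 and 1 / sqrt(5000)
```

A streaming update like this avoids storing every sample at every point, which matters when a goal such as ``total_samples >= 10000`` accumulates many samples per point.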

docs/source/tutorial/tutorial.rst

Lines changed: 1 addition & 0 deletions
@@ -21,6 +21,7 @@ We recommend to start with the :ref:`Tutorial `~adaptive.Learner1D``.
    tutorial.Learner2D
    tutorial.custom_loss
    tutorial.AverageLearner
+   tutorial.AverageLearner1D
    tutorial.BalancingLearner
    tutorial.DataSaver
    tutorial.IntegratorLearner
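The exploration/exploitation balance that the tutorial's ``delta`` parameter controls can be illustrated with a toy decision rule. This is a hypothetical sketch that only mirrors the direction of the trade-off (larger ``delta`` favors new points); it is not AverageLearner1D's actual internal criterion:

```python
def choose_action(point_errors, delta):
    """Illustrative rule: resample the most uncertain existing point
    when its standard error exceeds ``delta``; otherwise explore a new
    point.  A larger ``delta`` therefore triggers resampling less often,
    i.e. it favors exploration.

    point_errors maps x -> standard error of the mean estimate at x.
    """
    if not point_errors:
        return "explore", None  # nothing sampled yet
    x_worst = max(point_errors, key=point_errors.get)
    if point_errors[x_worst] > delta:
        return "resample", x_worst
    return "explore", None

print(choose_action({-1.0: 0.5, 0.0: 0.05}, delta=0.2))   # ('resample', -1.0)
print(choose_action({-1.0: 0.05, 0.0: 0.05}, delta=0.2))  # ('explore', None)
```

Under this toy rule, the tutorial's ``min_samples``, ``max_samples``, and ``min_error`` knobs would act as extra guards that cap how long either branch can dominate.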

example-notebook.ipynb

Lines changed: 103 additions & 0 deletions
@@ -290,6 +290,109 @@
     "runner.live_plot(update_interval=0.1)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Average 1D learner"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "[`adaptive`](https://github.com/python-adaptive/adaptive) can also be used to sample noisy functions. The `AverageLearner1D` estimates the mean value of a 1D stochastic function by taking many samples at different points and estimating the mean value at those points.\n",
+    "\n",
+    "Let us consider the following function:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def f(x, sigma=0, peak_width=0.05, offset=-0.5, wait=False):\n",
+    "    from time import sleep\n",
+    "    from random import random\n",
+    "\n",
+    "    if wait:\n",
+    "        sleep(random())\n",
+    "\n",
+    "    function = x ** 3 - x + 3 * peak_width ** 2 / (peak_width ** 2 + (x - offset) ** 2)\n",
+    "    return function + np.random.normal(0, sigma)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This is how the function looks in the absence of noise:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import matplotlib.pyplot as plt\n",
+    "x = np.linspace(-2, 2, 500)\n",
+    "plt.plot(x, f(x, sigma=0));"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This is how a single realization of the stochastic function looks:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "plt.plot(x, [f(xi, sigma=1) for xi in x]);"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `AverageLearner1D` can be run in a similar way to the `Learner1D`:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "learner = adaptive.AverageLearner1D(function=partial(f, sigma=1), bounds=(-2, 2))\n",
+    "\n",
+    "runner = adaptive.Runner(learner, goal=lambda l: l.total_samples >= 10000\n",
+    "                         and min(l._number_samples.values()) >= 20)\n",
+    "runner.live_info()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The live plot shows the mean value of the function at each point, with error bars that correspond to the standard deviation of the estimate of the mean value:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "runner.live_plot(update_interval=0.1)"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
