@@ -8,11 +8,10 @@ Tutorial `~adaptive.Learner1D`
 
 .. seealso::
     The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`Learner1D`
+    :jupyter-download:notebook:`tutorial.Learner1D`
 
-.. execute::
+.. jupyter-execute::
     :hide-code:
-    :new-notebook: Learner1D
 
     import adaptive
     adaptive.notebook_extension()
@@ -30,7 +29,7 @@ We start with the most common use-case: sampling a 1D function
 We will use the following function, which is a smooth (linear)
 background with a sharp peak at a random location:
 
-.. execute::
+.. jupyter-execute::
 
     offset = random.uniform(-0.5, 0.5)
 
@@ -47,7 +46,7 @@ We start by initializing a 1D “learner”, which will suggest points to
 evaluate, and adapt its suggestions as more and more points are
 evaluated.
 
-.. execute::
+.. jupyter-execute::
 
     learner = adaptive.Learner1D(f, bounds=(-1, 1))
 
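For readers of this diff: the idea behind `Learner1D` can be sketched without the library at all. The following is a rough illustration only, not adaptive's actual algorithm; the peak location is fixed here (the tutorial draws it at random) and the peak width is an assumed value, widened from the tutorial's 0.01 so the effect is visible with few points.

```python
offset = 0.123  # fixed for reproducibility; the tutorial uses a random offset

def f(x, a=0.05):
    # Linear background with a sharp peak, as in the tutorial's example.
    return x + a**2 / (a**2 + (x - offset)**2)

def adaptive_sample_1d(f, bounds, n_points):
    """Greedy sketch: repeatedly bisect the subinterval whose (x, y)
    segment is longest, so steep features collect more points."""
    xs = [bounds[0], bounds[1]]
    ys = [f(x) for x in xs]
    while len(xs) < n_points:
        # "Loss" of a subinterval: Euclidean length of its (x, y) segment.
        losses = [((xs[i + 1] - xs[i]) ** 2 + (ys[i + 1] - ys[i]) ** 2) ** 0.5
                  for i in range(len(xs) - 1)]
        i = max(range(len(losses)), key=losses.__getitem__)
        x_mid = (xs[i] + xs[i + 1]) / 2
        xs.insert(i + 1, x_mid)
        ys.insert(i + 1, f(x_mid))
    return xs, ys

xs, ys = adaptive_sample_1d(f, (-1, 1), 100)
# Far more of the 100 points land near the peak than in a flat region
# of the same width, which is exactly the behavior the learner exploits.
near_peak = sum(abs(x - offset) < 0.15 for x in xs)
far_away = sum(-0.8 < x < -0.5 for x in xs)
```

The real `Learner1D` uses a configurable loss function and handles concurrency, but the density contrast between `near_peak` and `far_away` shows the same mechanism.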
@@ -61,13 +60,13 @@ On Windows systems the runner will try to use a `distributed.Client`
 if `distributed` is installed. A `~concurrent.futures.ProcessPoolExecutor`
 cannot be used on Windows for reasons.
 
-.. execute::
+.. jupyter-execute::
 
     # The end condition is when the "loss" is less than 0.01. In the context of the
     # 1D learner this means that we will resolve features in 'func' with width 0.01 or wider.
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
 
-.. execute::
+.. jupyter-execute::
     :hide-code:
 
     await runner.task  # This is not needed in a notebook environment!
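The `await runner.task` cell above works because the runner schedules its work as an asyncio task. A minimal sketch of the same pattern with plain `asyncio` (hypothetical names, a placeholder computation; not adaptive's internals):

```python
import asyncio

async def evaluate_points(results):
    # Stand-in for the Runner's loop: ask the learner for a point,
    # evaluate it, and tell the result back. Here we just square integers.
    for x in range(5):
        await asyncio.sleep(0)  # yield control, like awaiting an executor
        results.append(x * x)

async def main():
    results = []
    task = asyncio.create_task(evaluate_points(results))  # runs in the background
    # Other coroutines (e.g. a live plot) could run here while the task
    # makes progress; awaiting it is the analogue of `await runner.task`.
    await task
    return results

print(asyncio.run(main()))  # [0, 1, 4, 9, 16]
```

In a notebook the event loop is already running, which is why the explicit `await runner.task` is only needed in scripts and in these docs.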
@@ -76,23 +75,23 @@ When instantiated in a Jupyter notebook the runner does its job in the
 background and does not block the IPython kernel. We can use this to
 create a plot that updates as new data arrives:
 
-.. execute::
+.. jupyter-execute::
 
     runner.live_info()
 
-.. execute::
+.. jupyter-execute::
 
     runner.live_plot(update_interval=0.1)
 
 We can now compare the adaptive sampling to a homogeneous sampling with
 the same number of points:
 
-.. execute::
+.. jupyter-execute::
 
     if not runner.task.done():
         raise RuntimeError('Wait for the runner to finish before executing the cells below!')
 
-.. execute::
+.. jupyter-execute::
 
     learner2 = adaptive.Learner1D(f, bounds=learner.bounds)
 
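The point of this comparison: with the same point budget, homogeneous sampling under-resolves the narrow peak. A self-contained sketch (fixed peak location and plain piecewise-linear interpolation, both assumptions for illustration) showing that the worst interpolation error of a uniform grid sits right at the peak:

```python
import bisect

offset = 0.123  # fixed for reproducibility; the tutorial draws it at random

def f(x, a=0.01):
    # The tutorial's linear background with a sharp peak of width ~a.
    return x + a**2 / (a**2 + (x - offset)**2)

def interp(x, xs, ys):
    # Piecewise-linear interpolation through the sampled points.
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

n = 30
xs = [-1 + 2 * i / (n - 1) for i in range(n)]  # homogeneous grid
ys = [f(x) for x in xs]

# Evaluate the interpolation error on a fine grid.
grid = [-1 + k / 1000 for k in range(2001)]
worst_x = max(grid, key=lambda x: abs(f(x) - interp(x, xs, ys)))
worst_err = abs(f(worst_x) - interp(worst_x, xs, ys))
```

With a uniform grid the peak falls between samples and `worst_err` is of order the peak height, while the error away from the peak is tiny; an adaptive learner spends its extra points exactly where `worst_x` ends up.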
@@ -107,7 +106,7 @@ vector output: ``f:ℝ → ℝ^N``
 
 Sometimes you may want to learn a function with vector output:
 
-.. execute::
+.. jupyter-execute::
 
     random.seed(0)
     offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]
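The full definition of `f_levels` lies outside this hunk's context; the version below is a hypothetical reconstruction (one peaked curve per offset, mirroring the scalar example), together with one plausible way a learner could score an interval for vector output, namely the worst per-component segment length:

```python
import random

random.seed(0)
offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]

def f_levels(x, offsets=offsets, a=0.01):
    # Hypothetical reconstruction: one value per "level", each a linear
    # background plus a peak at its own offset.
    return [x + a**2 / (a**2 + (x - o)**2) for o in offsets]

def vector_loss(p1, p2):
    # One way to score an interval when y is a vector: the worst
    # per-component Euclidean segment length between the two points.
    (x1, ys1), (x2, ys2) = p1, p2
    return max(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for y1, y2 in zip(ys1, ys2))

loss = vector_loss((-1, f_levels(-1)), (1, f_levels(1)))
```

Whatever loss the real `Learner1D` applies to vector output, the shape of the problem is the same: reduce a per-component disagreement to a single number that decides which interval to refine next.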
@@ -121,20 +120,20 @@ Sometimes you may want to learn a function with vector output:
 ``adaptive`` has you covered! The ``Learner1D`` can be used for such
 functions:
 
-.. execute::
+.. jupyter-execute::
 
     learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))
     runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
 
-.. execute::
+.. jupyter-execute::
     :hide-code:
 
     await runner.task  # This is not needed in a notebook environment!
 
-.. execute::
+.. jupyter-execute::
 
     runner.live_info()
 
-.. execute::
+.. jupyter-execute::
 
     runner.live_plot(update_interval=0.1)