
Commit eef11f0: "add doc files" (parent: fcb7820)

36 files changed, 426 additions and 179 deletions

README.rst (3 additions, 3 deletions)

@@ -10,15 +10,15 @@ chat at https://gitter.im/python-adaptive/adaptive|
 
 **Tools for adaptive parallel sampling of mathematical functions.**
 
-``adaptive`` is an `open-source <LICENSE>`_ Python library designed to
+``adaptive`` is an open-source Python library designed to
 make adaptive parallel function evaluation simple. With ``adaptive`` you
 just supply a function with its bounds, and it will be evaluated at the
 “best” points in parameter space. With just a few lines of code you can
 evaluate functions on a computing cluster, live-plot the data as it
 returns, and fine-tune the adaptive sampling algorithm.
 
 Check out the ``adaptive`` example notebook
-`learner.ipynb <learner.ipynb>`_ (or run it `live on
+`learner.ipynb <https://github.com/python-adaptive/adaptive/blob/master/learner.ipynb>`_ (or run it `live on
 Binder <https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb>`_)
 to see examples of how to use ``adaptive``.
 
@@ -127,7 +127,7 @@ We would like to give credits to the following people:
   Mathematical Software, 37 (3), art. no. 26, 2010.
 - Pauli Virtanen for his ``AdaptiveTriSampling`` script (no longer
   available online since SciPy Central went down) which served as
-  inspiration for the `Learner2D <adaptive/learner/learner2D.py>`_.
+  inspiration for the ``~adaptive.Learner2D``.
 
 For general discussion, we have a `Gitter chat
 channel <https://gitter.im/python-adaptive/adaptive>`_. If you find any
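The sampling idea the README describes (evaluate a function at the "best" points in parameter space) can be sketched in a few lines of plain Python. This is an illustrative toy, not the library's actual algorithm: it repeatedly bisects the interval whose endpoints are farthest apart in the (x, y) plane.

```python
import math

def adaptive_sample_1d(f, bounds, n_points):
    """Toy adaptive sampler: bisect the interval with the longest chord."""
    xs = sorted(bounds)          # start from the two boundary points
    ys = [f(x) for x in xs]
    while len(xs) < n_points:
        # "Loss" of each interval: Euclidean length of its chord, so
        # steep or curved regions get subdivided first.
        losses = [math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                  for i in range(len(xs) - 1)]
        i = losses.index(max(losses))
        x_new = (xs[i] + xs[i + 1]) / 2
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys

xs, ys = adaptive_sample_1d(lambda x: math.exp(-x * x), (-2.0, 2.0), 33)
```

The real library layers parallel execution, live plotting, and pluggable loss functions on top of this basic refine-the-worst-interval loop.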

adaptive/__init__.py (4 additions, 4 deletions)

@@ -8,15 +8,15 @@
 from . import runner
 from . import utils
 
-from .learner import (Learner1D, Learner2D, LearnerND, AverageLearner,
-                      BalancingLearner, make_datasaver, DataSaver,
-                      IntegratorLearner)
+from .learner import (BaseLearner, Learner1D, Learner2D, LearnerND,
+                      AverageLearner, BalancingLearner, make_datasaver,
+                      DataSaver, IntegratorLearner)
 
 with suppress(ImportError):
     # Only available if 'scikit-optimize' is installed
     from .learner import SKOptLearner
 
-from .runner import Runner, BlockingRunner
+from .runner import Runner, AsyncRunner, BlockingRunner
 
 from ._version import __version__
 del _version
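The `with suppress(ImportError)` guard used above for `SKOptLearner` is a standard optional-dependency pattern: the import is attempted, and silently skipped if the package is absent. A minimal sketch (the module names here are stand-ins, not adaptive's dependencies):

```python
from contextlib import suppress

available = []

with suppress(ImportError):
    import sqlite3                 # stands in for an installed optional dependency
    available.append("sqlite3")

with suppress(ImportError):
    import not_a_real_module       # missing dependency: silently skipped
    available.append("not_a_real_module")

# Only the importable module made it into the public surface.
```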

adaptive/learner/average_learner.py (2 additions, 2 deletions)

@@ -17,9 +17,9 @@ class AverageLearner(BaseLearner):
     Parameters
     ----------
     atol : float
-        Desired absolute tolerance
+        Desired absolute tolerance.
     rtol : float
-        Desired relative tolerance
+        Desired relative tolerance.
 
     Attributes
     ----------
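One plausible reading of how `atol` and `rtol` combine in an averaging learner (a sketch under that assumption; the `AverageLearner`'s actual stopping rule may differ in detail): stop sampling once the standard error of the running mean drops below the larger of the two tolerances.

```python
from math import sqrt
from statistics import mean, stdev

def converged(values, atol, rtol):
    """Sketch: standard error of the mean vs. the combined tolerance."""
    if len(values) < 2:
        return False
    standard_error = stdev(values) / sqrt(len(values))
    # Absolute and relative tolerances combine via max(), a common convention.
    return standard_error < max(atol, rtol * abs(mean(values)))

tight = converged([2.0, 2.000001, 1.999999], atol=1e-3, rtol=1e-3)
loose = converged([1.0, 5.0], atol=1e-3, rtol=1e-3)
```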

adaptive/learner/balancing_learner.py (24 additions, 13 deletions)

@@ -21,27 +21,32 @@ class BalancingLearner(BaseLearner):
 
     Parameters
     ----------
-    learners : sequence of BaseLearner
+    learners : sequence of `BaseLearner`
        The learners from which to choose. These must all have the same type.
     cdims : sequence of dicts, or (keys, iterable of values), optional
        Constant dimensions; the parameters that label the learners. Used
        in `plot`.
        Example inputs that all give identical results:
+
        - sequence of dicts:
+
        >>> cdims = [{'A': True, 'B': 0},
        ...          {'A': True, 'B': 1},
        ...          {'A': False, 'B': 0},
        ...          {'A': False, 'B': 1}]
+
        - tuple with (keys, iterable of values):
+
        >>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
        >>> cdims = (['A', 'B'], [(True, 0), (True, 1),
        ...                       (False, 0), (False, 1)])
+
     strategy : 'loss_improvements' (default), 'loss', or 'npoints'
-        The points that the 'BalancingLearner' chooses can be either based on:
+        The points that the `BalancingLearner` chooses can be either based on:
        the best 'loss_improvements', the smallest total 'loss' of the
        child learners, or the number of points per learner, using 'npoints'.
        One can dynamically change the strategy while the simulation is
-        running by changing the 'learner.strategy' attribute.
+        running by changing the ``learner.strategy`` attribute.
 
     Notes
     -----
@@ -50,7 +55,7 @@ class BalancingLearner(BaseLearner):
     compared*. For the moment we enforce this restriction by requiring that
     all learners are the same type but (depending on the internals of the
     learner) it may be that the loss cannot be compared *even between learners
-    of the same type*. In this case the BalancingLearner will behave in an
+    of the same type*. In this case the `BalancingLearner` will behave in an
     undefined way.
     """
 
@@ -183,28 +188,34 @@ def plot(self, cdims=None, plotter=None, dynamic=True):
        cdims : sequence of dicts, or (keys, iterable of values), optional
            Constant dimensions; the parameters that label the learners.
            Example inputs that all give identical results:
+
            - sequence of dicts:
+
            >>> cdims = [{'A': True, 'B': 0},
            ...          {'A': True, 'B': 1},
            ...          {'A': False, 'B': 0},
            ...          {'A': False, 'B': 1}]
+
            - tuple with (keys, iterable of values):
+
            >>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
            >>> cdims = (['A', 'B'], [(True, 0), (True, 1),
            ...                       (False, 0), (False, 1)])
+
        plotter : callable, optional
            A function that takes the learner as an argument and returns a
-            holoviews object. By default learner.plot() will be called.
+            holoviews object. By default ``learner.plot()`` will be called.
        dynamic : bool, default True
-            Return a holoviews.DynamicMap if True, else a holoviews.HoloMap.
-            The DynamicMap is rendered as the sliders change and can therefore
-            not be exported to html. The HoloMap does not have this problem.
+            Return a `holoviews.core.DynamicMap` if True, else a
+            `holoviews.core.HoloMap`. The `~holoviews.core.DynamicMap` is
+            rendered as the sliders change and can therefore not be exported
+            to html. The `~holoviews.core.HoloMap` does not have this problem.
 
        Returns
        -------
-        dm : holoviews.DynamicMap object (default) or holoviews.HoloMap object
-            A DynamicMap (dynamic=True) or HoloMap (dynamic=False) with
-            sliders that are defined by 'cdims'.
+        dm : `holoviews.core.DynamicMap` (default) or `holoviews.core.HoloMap`
+            A `DynamicMap` (dynamic=True) or `HoloMap` (dynamic=False) with
+            sliders that are defined by `cdims`.
        """
        hv = ensure_holoviews()
        cdims = cdims or self._cdims_default
@@ -248,13 +259,13 @@ def remove_unfinished(self):
    def from_product(cls, f, learner_type, learner_kwargs, combos):
        """Create a `BalancingLearner` with learners of all combinations of
        named variables’ values. The `cdims` will be set correctly, so calling
-        `learner.plot` will be a `holoviews.HoloMap` with the correct labels.
+        `learner.plot` will be a `holoviews.core.HoloMap` with the correct labels.
 
        Parameters
        ----------
        f : callable
            Function to learn, must take arguments provided in `combos`.
-        learner_type : BaseLearner
+        learner_type : `BaseLearner`
            The learner that should wrap the function. For example `Learner1D`.
        learner_kwargs : dict
            Keyword argument for the `learner_type`. For example `dict(bounds=[0, 1])`.
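The docstring's claim that the two `cdims` forms "give identical results" can be verified directly: expanding the `(keys, iterable of values)` form with `dict(zip(...))` reproduces the sequence-of-dicts form.

```python
import itertools

# Form 1: explicit sequence of dicts.
cdims_dicts = [{'A': True, 'B': 0},
               {'A': True, 'B': 1},
               {'A': False, 'B': 0},
               {'A': False, 'B': 1}]

# Form 2: (keys, iterable of value tuples); expand it into form 1.
keys, values = (['A', 'B'], itertools.product([True, False], [0, 1]))
expanded = [dict(zip(keys, combo)) for combo in values]
```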

adaptive/learner/base_learner.py (6 additions, 5 deletions)

@@ -11,14 +11,16 @@ class BaseLearner(metaclass=abc.ABCMeta):
     function : callable: X → Y
        The function to learn.
     data : dict: X → Y
-        'function' evaluated at certain points.
+        `function` evaluated at certain points.
        The values can be 'None', which indicates that the point
        will be evaluated, but that we do not have the result yet.
     npoints : int, optional
        The number of evaluated points that have been added to the learner.
        Subclasses do not *have* to implement this attribute.
 
-    Subclasses may define a 'plot' method that takes no parameters
+    Notes
+    -----
+    Subclasses may define a ``plot`` method that takes no parameters
     and returns a holoviews plot.
     """
 
@@ -75,9 +77,8 @@ def ask(self, n, tell_pending=True):
        n : int
            The number of points to choose.
        tell_pending : bool, default: True
-            If True, add the chosen points to this
-            learner's 'data' with 'None' for the 'y'
-            values. Set this to False if you do not
+            If True, add the chosen points to this learner's
+            `pending_points`. Set this to False if you do not
            want to modify the state of the learner.
        """
        pass
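The `ask`/`tell_pending` contract described above can be illustrated with a toy learner. This is a sketch of the interface only, not the real `adaptive.BaseLearner`; the `GridLearner` strategy is invented for the example.

```python
import abc

class MiniLearner(abc.ABC):
    """Toy skeleton of the ask/tell interface; illustrative only."""

    def __init__(self):
        self.data = {}               # x -> y for evaluated points
        self.pending_points = set()  # asked for, but not yet evaluated

    @abc.abstractmethod
    def _choose(self, n):
        """Pick n new points to evaluate."""

    def ask(self, n, tell_pending=True):
        points = self._choose(n)
        if tell_pending:
            # Record the points as pending, as the docstring describes.
            self.pending_points.update(points)
        return points

    def tell(self, x, y):
        self.pending_points.discard(x)
        self.data[x] = y

class GridLearner(MiniLearner):
    def _choose(self, n):
        # Naive strategy: the next n integers not already known or pending.
        known = set(self.data) | self.pending_points
        points, x = [], 0
        while len(points) < n:
            if x not in known:
                points.append(x)
            x += 1
        return points

learner = GridLearner()
first = learner.ask(2)    # both points become pending
learner.tell(0, 10.0)     # point 0 moves from pending to data
second = learner.ask(1)   # skips points already known or pending
```

Setting `tell_pending=False` would leave `pending_points` untouched, so a later `ask` could return the same points again.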

adaptive/learner/data_saver.py (13 additions, 11 deletions)

@@ -8,18 +8,19 @@ class DataSaver:
 
     Parameters
     ----------
-    learner : Learner object
+    learner : `~adaptive.BaseLearner` instance
        The learner that needs to be wrapped.
     arg_picker : function
        Function that returns the argument that needs to be learned.
 
     Example
     -------
     Imagine we have a function that returns a dictionary
-    of the form: `{'y': y, 'err_est': err_est}`.
-
+    of the form: ``{'y': y, 'err_est': err_est}``.
+
+    >>> from operator import itemgetter
     >>> _learner = Learner1D(f, bounds=(-1.0, 1.0))
-    >>> learner = DataSaver(_learner, arg_picker=operator.itemgetter('y'))
+    >>> learner = DataSaver(_learner, arg_picker=itemgetter('y'))
     """
 
     def __init__(self, learner, arg_picker):
@@ -46,28 +47,29 @@ def _ds(learner_type, arg_picker, *args, **kwargs):
 
 
 def make_datasaver(learner_type, arg_picker):
-    """Create a DataSaver of a `learner_type` that can be instantiated
+    """Create a `DataSaver` of a `learner_type` that can be instantiated
     with the `learner_type`'s key-word arguments.
 
     Parameters
     ----------
-    learner_type : BaseLearner
+    learner_type : `~adaptive.BaseLearner` type
        The learner type that needs to be wrapped.
     arg_picker : function
        Function that returns the argument that needs to be learned.
 
     Example
     -------
     Imagine we have a function that returns a dictionary
-    of the form: `{'y': y, 'err_est': err_est}`.
+    of the form: ``{'y': y, 'err_est': err_est}``.
 
-    >>> DataSaver = make_datasaver(Learner1D,
-    ...             arg_picker=operator.itemgetter('y'))
+    >>> from operator import itemgetter
+    >>> DataSaver = make_datasaver(Learner1D, arg_picker=itemgetter('y'))
     >>> learner = DataSaver(function=f, bounds=(-1.0, 1.0))
 
-    Or when using `BalacingLearner.from_product`:
+    Or when using `adaptive.BalancingLearner.from_product`:
+
     >>> learner_type = make_datasaver(adaptive.Learner1D,
-    ...                               arg_picker=operator.itemgetter('y'))
+    ...                               arg_picker=itemgetter('y'))
     >>> learner = adaptive.BalancingLearner.from_product(
     ...     jacobi, learner_type, dict(bounds=(0, 1)), combos)
     """
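The wrapping idea behind `DataSaver` can be sketched without the library: forward one field of a dict-valued result to the "learned" data and stash the rest. The `MiniDataSaver` class below is invented for illustration; the real `DataSaver` wraps a learner object rather than storing dicts directly.

```python
from operator import itemgetter

class MiniDataSaver:
    """Sketch: keep one picked field as the learned value, the full
    result in .extra_data (illustrative, not adaptive's DataSaver)."""

    def __init__(self, arg_picker):
        self.arg_picker = arg_picker
        self.learned = {}     # what the wrapped learner would receive
        self.extra_data = {}  # the complete result, keyed by x

    def tell(self, x, result):
        self.learned[x] = self.arg_picker(result)
        self.extra_data[x] = result

saver = MiniDataSaver(arg_picker=itemgetter('y'))
saver.tell(0.5, {'y': 1.25, 'err_est': 0.01})
```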

adaptive/learner/integrator_learner.py (1 addition, 1 deletion)

@@ -330,7 +330,7 @@ def __init__(self, function, bounds, tol):
            The integral value in `self.bounds`.
        err : float
            The absolute error associated with `self.igral`.
-        max_ivals : int, default 1000
+        max_ivals : int, default: 1000
            Maximum number of intervals that can be present in the calculation
            of the integral. If this amount exceeds max_ivals, the interval
            with the smallest error will be discarded.
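The interval bookkeeping this docstring describes can be sketched with a simple adaptive bisection rule. This is illustrative only: the real `IntegratorLearner` uses higher-order quadrature rules, and on hitting the cap it discards the smallest-error interval, whereas this sketch simply stops refining.

```python
def _estimate(f, lo, hi):
    # Simpson value plus a cheap error proxy (trapezoid vs. midpoint spread).
    trap = (f(lo) + f(hi)) * (hi - lo) / 2
    mid = f((lo + hi) / 2) * (hi - lo)
    return (trap + 2 * mid) / 3, abs(trap - mid)

def integrate(f, a, b, tol=1e-9, max_ivals=1000):
    val, err = _estimate(f, a, b)
    ivals = [(a, b, val, err)]   # (lo, hi, value, error estimate)
    while sum(iv[3] for iv in ivals) > tol and len(ivals) < max_ivals:
        # Bisect the interval with the largest error estimate.
        worst = max(range(len(ivals)), key=lambda i: ivals[i][3])
        lo, hi, _, _ = ivals.pop(worst)
        m = (lo + hi) / 2
        for p, q in ((lo, m), (m, hi)):
            v, e = _estimate(f, p, q)
            ivals.append((p, q, v, e))
    return sum(iv[2] for iv in ivals)

result = integrate(lambda x: x * x, 0.0, 1.0)   # exact answer: 1/3
```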

adaptive/learner/learner1D.py (11 additions, 4 deletions)

@@ -94,17 +94,24 @@ class Learner1D(BaseLearner):
        If not provided, then a default is used, which uses the scaled distance
        in the x-y plane as the loss. See the notes for more details.
 
+    Attributes
+    ----------
+    data : dict
+        Sampled points and values.
+    pending_points : set
+        Points that still have to be evaluated.
+
     Notes
     -----
-    'loss_per_interval' takes 3 parameters: interval, scale, and function_values,
-    and returns a scalar; the loss over the interval.
+    `loss_per_interval` takes 3 parameters: ``interval``, ``scale``, and
+    ``function_values``, and returns a scalar; the loss over the interval.
 
     interval : (float, float)
        The bounds of the interval.
     scale : (float, float)
        The x and y scale over all the intervals, useful for rescaling the
        interval loss.
-    function_values : dict(float -> float)
+    function_values : dict(float → float)
        A map containing evaluated function values. It is guaranteed
        to have values for both of the points in 'interval'.
     """

@@ -363,7 +370,7 @@ def tell_many(self, xs, ys, *, force=False):
            x_left, x_right = ival
            a, b = to_interpolate[-1] if to_interpolate else (None, None)
            if b == x_left and (a, b) not in self.losses:
-                # join (a, b) and (x_left, x_right) --> (a, x_right)
+                # join (a, b) and (x_left, x_right) → (a, x_right)
                to_interpolate[-1] = (a, x_right)
            else:
                to_interpolate.append((x_left, x_right))
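A custom loss with the three-parameter signature documented above might look like this. It is a sketch of the scaled-distance idea the docstring mentions; the library's actual default also guards against a zero y-scale, which this version omits.

```python
import math

def distance_loss(interval, scale, function_values):
    """Euclidean length of the interval's chord in the rescaled x-y plane."""
    x_left, x_right = interval
    x_scale, y_scale = scale
    dx = (x_right - x_left) / x_scale
    dy = (function_values[x_right] - function_values[x_left]) / y_scale
    return math.hypot(dx, dy)

vals = {0.0: 0.0, 0.5: 1.0, 1.0: 1.0}
steep = distance_loss((0.0, 0.5), (1.0, 1.0), vals)  # function rises here
flat = distance_loss((0.5, 1.0), (1.0, 1.0), vals)   # function is constant
```

The steep interval gets the larger loss, so a learner using this function would subdivide it first.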

adaptive/learner/learner2D.py (24 additions, 22 deletions)

@@ -62,8 +62,8 @@ def uniform_loss(ip):
 
 
 def resolution_loss(ip, min_distance=0, max_distance=1):
-    """Loss function that is similar to the default loss function, but you can
-    set the maximimum and minimum size of a triangle.
+    """Loss function that is similar to the `default_loss` function, but you
+    can set the maximum and minimum size of a triangle.
 
     Works with `~adaptive.Learner2D` only.
 
@@ -101,8 +101,8 @@ def resolution_loss(ip, min_distance=0, max_distance=1):
 
 def minimize_triangle_surface_loss(ip):
     """Loss function that is similar to the default loss function in the
-    `Learner1D`. The loss is the area spanned by the 3D vectors of the
-    vertices.
+    `~adaptive.Learner1D`. The loss is the area spanned by the 3D
+    vectors of the vertices.
 
     Works with `~adaptive.Learner2D` only.
 
@@ -206,15 +206,15 @@ class Learner2D(BaseLearner):
     pending_points : set
        Points that still have to be evaluated and are currently
        interpolated, see `data_combined`.
-    stack_size : int, default 10
+    stack_size : int, default: 10
        The size of the new candidate points stack. Set it to 1
        to recalculate the best points at each call to `ask`.
-    aspect_ratio : float, int, default 1
-        Average ratio of `x` span over `y` span of a triangle. If
-        there is more detail in either `x` or `y` the `aspect_ratio`
-        needs to be adjusted. When `aspect_ratio > 1` the
-        triangles will be stretched along `x`, otherwise
-        along `y`.
+    aspect_ratio : float, int, default: 1
+        Average ratio of ``x`` span over ``y`` span of a triangle. If
+        there is more detail in either ``x`` or ``y`` the ``aspect_ratio``
+        needs to be adjusted. When ``aspect_ratio > 1`` the
+        triangles will be stretched along ``x``, otherwise
+        along ``y``.
 
     Methods
     -------
@@ -239,13 +239,13 @@ class Learner2D(BaseLearner):
     This sampling procedure is not extremely fast, so to benefit from
     it, your function needs to be slow enough to compute.
 
-    'loss_per_triangle' takes a single parameter, 'ip', which is a
+    `loss_per_triangle` takes a single parameter, `ip`, which is a
     `scipy.interpolate.LinearNDInterpolator`. You can use the
-    *undocumented* attributes 'tri' and 'values' of 'ip' to get a
+    *undocumented* attributes ``tri`` and ``values`` of `ip` to get a
     `scipy.spatial.Delaunay` and a vector of function values.
     These can be used to compute the loss. The functions
-    `adaptive.learner.learner2D.areas` and
-    `adaptive.learner.learner2D.deviations` to calculate the
+    `~adaptive.learner.learner2D.areas` and
+    `~adaptive.learner.learner2D.deviations` to calculate the
     areas and deviations from a linear interpolation
     over each triangle.
     """

@@ -464,19 +464,21 @@ def plot(self, n=None, tri_alpha=0):
            Number of points in x and y. If None (default) this number is
            evaluated by looking at the size of the smallest triangle.
        tri_alpha : float
-            The opacity (0 <= tri_alpha <= 1) of the triangles overlayed on
-            top of the image. By default the triangulation is not visible.
+            The opacity ``(0 <= tri_alpha <= 1)`` of the triangles overlayed
+            on top of the image. By default the triangulation is not visible.
 
        Returns
        -------
-        plot : holoviews.Overlay or holoviews.HoloMap
-            A `holoviews.Overlay` of `holoviews.Image * holoviews.EdgePaths`.
-            If the `learner.function` returns a vector output, a
-            `holoviews.HoloMap` of the `holoviews.Overlay`s wil be returned.
+        plot : `holoviews.core.Overlay` or `holoviews.core.HoloMap`
+            A `holoviews.core.Overlay` of
+            ``holoviews.Image * holoviews.EdgePaths``. If the
+            `learner.function` returns a vector output, a
+            `holoviews.core.HoloMap` of the
+            `holoviews.core.Overlay`\s will be returned.
 
        Notes
        -----
-        The plot object that is returned if `learner.function` returns a
+        The plot object that is returned if ``learner.function`` returns a
        vector *cannot* be used with the live_plotting functionality.
        """
        hv = ensure_holoviews()
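The quantity behind `minimize_triangle_surface_loss` (the area spanned by the 3D points `(x, y, f(x, y))` of a triangle's vertices) is half the norm of a cross product. A self-contained sketch of that geometry, not the library's implementation:

```python
def triangle_area_3d(p0, p1, p2):
    """Area of the triangle with 3D vertices p0, p1, p2."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    return 0.5 * sum(c * c for c in cross) ** 0.5

# Same base triangle in the x-y plane; only the function values differ.
flat = triangle_area_3d((0, 0, 0), (1, 0, 0), (0, 1, 0))    # f constant
steep = triangle_area_3d((0, 0, 0), (1, 0, 2), (0, 1, 2))   # f varies
```

A steep function surface inflates the 3D area of the triangle, so a loss based on it drives refinement toward regions of rapid variation.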
