
Commit 151de6d

ND tensor printing (#509)
* add improved ND tensor printing
* [tests] add tests for ND tensor printing
* remove outdated disp3d, disp4d
* add custom precision option to `pretty`, some proc ↦ func
* `$` use `pretty` internally for custom precision support, CUDA tensors
* add some Tensor[float] tests including custom precision
* fix up tensor prints in tutorials
* add tutorial section about pretty printing of tensors of rank N>2
* [tests] only run float tests for nim >= 1.4
* allow default precision -1 as it's what `$` uses. Otherwise this introduces breaking changes of tensor printing (and is ugly)
* update test to reflect precision=-1 allowed
1 parent a95503e commit 151de6d

10 files changed

Lines changed: 526 additions & 215 deletions


docs/tuto.broadcasting.rst

Lines changed: 5 additions & 5 deletions
@@ -12,11 +12,11 @@ beginning with a dot:
  let k = [0, 1, 2].toTensor.reshape(1,3)

  echo j +. k
- # Tensor of shape 4x3 of type "int" on backend "Cpu"
- # |0 1 2|
- # |10 11 12|
- # |20 21 22|
- # |30 31 32|
+ # Tensor[int] of shape "[4, 3]" on backend "Cpu"
+ # |0 1 2|
+ # |10 11 12|
+ # |20 21 22|
+ # |30 31 32|

  - ``+.``,\ ``-.``,
  - ``*.``: broadcasted element-wise matrix multiplication also called
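The `j +. k` result in this hunk can be sketched in plain Python. This is an illustrative analogy, not Arraymancer's implementation; `j`'s values are not shown in the hunk and are inferred from the printed output:

```python
# Broadcast a (4, 1) column against a (1, 3) row to a (4, 3) grid,
# the way `j +. k` broadcasts both operands.
j = [[0], [10], [20], [30]]   # shape (4, 1), inferred from the output
k = [[0, 1, 2]]               # shape (1, 3), as in the tutorial

# Each output cell pairs j's row value with k's column value.
result = [[j[r][0] + k[0][c] for c in range(3)] for r in range(4)]
for row in result:
    print(row)   # [0, 1, 2], [10, 11, 12], [20, 21, 22], [30, 31, 32]
```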

docs/tuto.first_steps.rst

Lines changed: 88 additions & 23 deletions
@@ -22,7 +22,7 @@ a sequence of numbers of steps to get the next item along a dimension. -
  let d = [[1, 2, 3], [4, 5, 6]].toTensor()

  echo d
- # Tensor of shape 2x3 of type "int" on backend "Cpu"
+ # Tensor[int] of shape "[2, 3]" on backend "Cpu"
  # |1 2 3|
  # |4 5 6|
@@ -64,9 +64,10 @@ arrays of sequences.
  ].toTensor()
  echo c

- # Tensor of shape 4x2x3 of type "int" on backend "Cpu"
- # | 1 2 3 | 11 22 33 | 111 222 333 | 1111 2222 3333|
- # | 4 5 6 | 44 55 66 | 444 555 666 | 4444 5555 6666|
+ # Tensor[system.int] of shape "[4, 2, 3]" on backend "Cpu"
+ # 0 1 2 3
+ # |1 2 3| |11 22 33| |111 222 333| |1111 2222 3333|
+ # |4 5 6| |44 55 66| |444 555 666| |4444 5555 6666|

  ``newTensor`` procedure can be used to initialize a tensor of a specific
  shape with a default value. (0 for numbers, false for bool …)
@@ -80,34 +81,34 @@ tensor of the same shape but filled with 0 and 1 respectively.
  .. code:: nim

  let e = newTensor[bool]([2, 3])
- # Tensor of shape 2x3 of type "bool" on backend "Cpu"
+ # Tensor[bool] of shape "[2, 3]" on backend "Cpu"
  # |false false false|
  # |false false false|

  let f = zeros[float]([4, 3])
- # Tensor of shape 4x3 of type "float" on backend "Cpu"
+ # Tensor[float] of shape "[4, 3]" on backend "Cpu"
  # |0.0 0.0 0.0|
  # |0.0 0.0 0.0|
  # |0.0 0.0 0.0|
  # |0.0 0.0 0.0|

  let g = ones[float]([4, 3])
- # Tensor of shape 4x3 of type "float" on backend "Cpu"
- # |1.0 1.0 1.0|
- # |1.0 1.0 1.0|
- # |1.0 1.0 1.0|
- # |1.0 1.0 1.0|
+ # Tensor[float] of shape "[4, 3]" on backend "Cpu"
+ # |1.0 1.0 1.0|
+ # |1.0 1.0 1.0|
+ # |1.0 1.0 1.0|
+ # |1.0 1.0 1.0|

  let tmp = [[1,2],[3,4]].toTensor()
  let h = tmp.zeros_like
- # Tensor of shape 2x2 of type "int" on backend "Cpu"
+ # Tensor[int] of shape "[2, 2]" on backend "Cpu"
  # |0 0|
  # |0 0|

  let i = tmp.ones_like
- # Tensor of shape 2x2 of type "int" on backend "Cpu"
- # |1 1|
- # |1 1|
+ # Tensor[int] of shape "[2, 2]" on backend "Cpu"
+ # |1 1|
+ # |1 1|

  Accessing and modifying a value
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
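The behaviour of `zeros_like` / `ones_like` in this hunk can be sketched in plain Python. This is a hedged analogy with a hypothetical helper `full_like`, not Arraymancer's API:

```python
# Sketch of zeros_like / ones_like: build a nested list with the
# same shape as the input, filled with a constant value.
def full_like(t, value):
    if isinstance(t, list):
        return [full_like(x, value) for x in t]
    return value   # leaf element: replace with the fill value

tmp = [[1, 2], [3, 4]]
print(full_like(tmp, 0))   # [[0, 0], [0, 0]]
print(full_like(tmp, 1))   # [[1, 1], [1, 1]]
```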
@@ -119,20 +120,22 @@ Tensors value can be retrieved or set with array brackets.
  var a = toSeq(1..24).toTensor().reshape(2,3,4)

  echo a
- # Tensor of shape 2x3x4 of type "int" on backend "Cpu"
- # | 1 2 3 4 | 13 14 15 16|
- # | 5 6 7 8 | 17 18 19 20|
- # | 9 10 11 12 | 21 22 23 24|
+ # Tensor[system.int] of shape "[2, 3, 4]" on backend "Cpu"
+ # 0 1
+ # |1 2 3 4| |13 14 15 16|
+ # |5 6 7 8| |17 18 19 20|
+ # |9 10 11 12| |21 22 23 24|

  echo a[1, 1, 1]
  # 18

  a[1, 1, 1] = 999
  echo a
- # Tensor of shape 2x3x4 of type "int" on backend "Cpu"
- # | 1 2 3 4 | 13 14 15 16|
- # | 5 6 7 8 | 17 999 19 20|
- # | 9 10 11 12 | 21 22 23 24|
+ # Tensor[system.int] of shape "[2, 3, 4]" on backend "Cpu"
+ # 0 1
+ # |1 2 3 4| |13 14 15 16|
+ # |5 6 7 8| |17 999 19 20|
+ # |9 10 11 12| |21 22 23 24|

  Copying
  ~~~~~~~
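Why `a[1, 1, 1]` is 18 can be checked with row-major flat-offset arithmetic in plain Python (an illustration of C-order indexing in general, not of Arraymancer's internal stride handling):

```python
# Row-major (C-order) strides for shape (2, 3, 4), in elements:
# moving one step in dim 0 skips 3*4 elements, dim 1 skips 4, dim 2 skips 1.
shape = (2, 3, 4)
strides = (shape[1] * shape[2], shape[2], 1)   # (12, 4, 1)

data = list(range(1, 25))                      # toSeq(1..24)

def at(i, j, k):
    # flat offset: i*12 + j*4 + k
    return data[i * strides[0] + j * strides[1] + k * strides[2]]

print(at(1, 1, 1))   # offset 1*12 + 1*4 + 1 = 17 -> value 18
```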
@@ -147,3 +150,65 @@ Full copy must be explicitly requested via the ``clone`` function.
147150
Here modifying ``b`` WILL modify ``a``.
148151
This behaviour is the same as Numpy and Julia,
149152
reasons can be found in the following `under the hood article<https://mratsim.github.io/Arraymancer/uth.copy_semantics.html>`_.
153+
154+
Tensor printing
155+
~~~~~~~~~~~~~~~
156+
157+
As already seen in the examples above, printing of tensors of
158+
arbitrary dimensionality is supported. For dimensions larger than 2 we
159+
need to pick a way to represent them on a 2D screen.
160+
161+
We pick a representation that is possibly the most "natural"
162+
generalization of pretty printing up to 2 dimensions. Consider the
163+
following:
164+
165+
- A scalar is of "even" rank (0) and is printed as a 1x1 grid.
166+
- A vector (*odd* rank 1) is represented by a *row* of scalars. That
167+
is a stacking of dimension N - 1 along the horizontal axis.
168+
- A matrix (*even* rank 2) is represented by *stacking* rows of
169+
vectors. That is we extend along the *vertical* axis of elements of
170+
dimension N - 1.
171+
172+
From here we continue along the pattern:
173+
- Odd dimensions N are *horizontal* stacks of the pretty print of N - 1
174+
- Even dimensions N are *vertical* stacks of the pretty print of N - 1
175+
176+
To help with visibility separators ``|`` and ``-`` are applied between
177+
stacks of different dimensions.
178+
179+
This yields a final 2D table of numbers where the dimension
180+
"increases" from outside to inside.
181+
182+
If we have a tensor of shape ``[2, 3, 4, 3, 2]`` the most "outer"
183+
layer is the first ``2``. As it is an odd dimension, this rank will be
184+
stacked horizontally. The next dimension ``3`` will be a stack in
185+
vertical. Inside of that are ``4`` horizontal stacks again until we
186+
reach the last two dimensions ``[3, 2]``, which are simply printed as
187+
expected for a 2D tensor.
188+
189+
To help with readability, the *index* of each of these dimensions is
190+
printed on the top (odd dimension) / left (even dimension) of the
191+
layer.
192+
193+
Take a look at the printing result of the aforementioned shape and try
194+
to understand the indexing shown on the top / right and how it relates
195+
to the different dimensions:
196+
197+
let t1 = toSeq(1..144).toTensor().reshape(2,3,4,3,2)
198+
# Tensor[system.int] of shape "[2, 3, 4, 3, 2]" on backend "Cpu"
199+
# 0 | 1
200+
# 0 1 2 3 | 0 1 2 3
201+
# |1 2| |7 8| |13 14| |19 20| | |73 74| |79 80| |85 86| |91 92|
202+
# 0 |3 4| |9 10| |15 16| |21 22| | 0 |75 76| |81 82| |87 88| |93 94|
203+
# |5 6| |11 12| |17 18| |23 24| | |77 78| |83 84| |89 90| |95 96|
204+
# --------------------------------------------------- | ---------------------------------------------------
205+
# 0 1 2 3 | 0 1 2 3
206+
# |25 26| |31 32| |37 38| |43 44| | |97 98| |103 104| |109 110| |115 116|
207+
# 1 |27 28| |33 34| |39 40| |45 46| | 1 |99 100| |105 106| |111 112| |117 118|
208+
# |29 30| |35 36| |41 42| |47 48| | |101 102| |107 108| |113 114| |119 120|
209+
# --------------------------------------------------- | ---------------------------------------------------
210+
# 0 1 2 3 | 0 1 2 3
211+
# |49 50| |55 56| |61 62| |67 68| | |121 122| |127 128| |133 134| |139 140|
212+
# 2 |51 52| |57 58| |63 64| |69 70| | 2 |123 124| |129 130| |135 136| |141 142|
213+
# |53 54| |59 60| |65 66| |71 72| | |125 126| |131 132| |137 138| |143 144|
214+
# --------------------------------------------------- | ---------------------------------------------------
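The odd/even stacking rule described in this section can be sketched in plain Python. This illustrates the rule as stated in the tutorial (odd remaining rank stacks horizontally, even vertically); it is not the actual `pretty` implementation:

```python
# Walk the shape from the outermost dimension inward and record how
# each layer is laid out, per the odd/even rule.
def stacking(shape):
    out = []
    for i, dim in enumerate(shape):
        rank = len(shape) - i   # rank of the sub-tensor at this layer
        out.append((dim, "horizontal" if rank % 2 == 1 else "vertical"))
    return out

# For shape (2, 3, 4, 3, 2): outer 2 horizontal, 3 vertical,
# 4 horizontal, then the trailing (3, 2) printed as an ordinary matrix.
print(stacking((2, 3, 4, 3, 2)))
```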

docs/tuto.iterators.rst

Lines changed: 5 additions & 4 deletions
@@ -12,10 +12,11 @@ Tensors can be iterated in the proper order. Arraymancer provides:
  import ../arraymancer, sequtils

  let a = toSeq(1..24).toTensor.reshape(2,3,4)
- # Tensor of shape 2x3x4 of type "int" on backend "Cpu"
- # | 1 2 3 4 | 13 14 15 16|
- # | 5 6 7 8 | 17 18 19 20|
- # | 9 10 11 12 | 21 22 23 24|
+ # Tensor[system.int] of shape "[2, 3, 4]" on backend "Cpu"
+ # 0 1
+ # |1 2 3 4| |13 14 15 16|
+ # |5 6 7 8| |17 18 19 20|
+ # |9 10 11 12| |21 22 23 24|

  for v in a:
    echo v
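The iteration order of `for v in a` can be sketched in plain Python: in row-major order the last index varies fastest, so iterating a `(2, 3, 4)` reshape of `1..24` yields the values back in sequence. An analogy, not Arraymancer's iterator code:

```python
import itertools

# Row-major iteration over shape (2, 3, 4): itertools.product varies
# the last index fastest, matching C-order element traversal.
shape = (2, 3, 4)
data = list(range(1, 25))

def flat(idx):
    i, j, k = idx
    return data[i * 12 + j * 4 + k]   # strides (12, 4, 1)

order = [flat(idx) for idx in itertools.product(*(range(d) for d in shape))]
print(order[:6])   # [1, 2, 3, 4, 5, 6]
```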

docs/tuto.linear_algebra.rst

Lines changed: 6 additions & 6 deletions
@@ -22,9 +22,9 @@ routines, see the `benchmarks section. <#micro-benchmark-int64-matrix-multiplica
  .. code:: nim

  echo foo_float * foo_float # Accelerated Matrix-Matrix multiplication (needs float)
- # Tensor of shape 5x5 of type "float" on backend "Cpu"
- # |15.0 55.0 225.0 979.0 4425.0|
- # |258.0 1146.0 5274.0 24810.0 118458.0|
- # |1641.0 7653.0 36363.0 174945.0 849171.0|
- # |6372.0 30340.0 146244.0 710980.0 3478212.0|
- # |18555.0 89355.0 434205.0 2123655.0 10436805.0|
+ # Tensor[float] of shape "[5, 5]" on backend "Cpu"
+ # |15.0 55.0 225.0 979.0 4425.0|
+ # |258.0 1146.0 5274.0 24810.0 118458.0|
+ # |1641.0 7653.0 36363.0 174945.0 849171.0|
+ # |6372.0 30340.0 146244.0 710980.0 3478212.0|
+ # |18555.0 89355.0 434205.0 2123655.0 10436805.0|
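The operation `*` performs here can be sketched as a plain triple-loop matrix multiply in Python. `foo_float` is not shown in this hunk, so a small made-up 2x2 example is used instead; Arraymancer itself dispatches to accelerated BLAS-style routines:

```python
# Naive matrix multiply: out[i][j] = sum_k a[i][k] * b[k][j].
def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

print(matmul([[1.0, 2.0], [3.0, 4.0]],
             [[5.0, 6.0], [7.0, 8.0]]))   # [[19.0, 22.0], [43.0, 50.0]]
```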

docs/tuto.shapeshifting.rst

Lines changed: 32 additions & 25 deletions
@@ -23,10 +23,14 @@ For example:

  let a = toSeq(1..24).toTensor().reshape(2,3,4)

- # Tensor of shape 2x3x4 of type "int" on backend "Cpu"
- # | 1 2 3 4 | 13 14 15 16|
- # | 5 6 7 8 | 17 18 19 20|
- # | 9 10 11 12 | 21 22 23 24|
+ # Tensor[system.int] of shape "[2, 3, 4]" on backend "Cpu"
+ # 0 1
+ # |1 2 3 4| |13 14 15 16|
+ # |5 6 7 8| |17 18 19 20|
+ # |9 10 11 12| |21 22 23 24|
+
+ The ``0`` and ``1`` correspond to the index along the first dimension
+ of the reshaped tensor.

  Permuting - Reordering dimension
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -39,18 +43,21 @@ tensor and the new dimension order
  let a = toSeq(1..24).toTensor.reshape(2,3,4)
  echo a

- # Tensor of shape 2x3x4 of type "int" on backend "Cpu"
- # | 1 2 3 4 | 13 14 15 16|
- # | 5 6 7 8 | 17 18 19 20|
- # | 9 10 11 12 | 21 22 23 24|
+ # Tensor[system.int] of shape "[2, 3, 4]" on backend "Cpu"
+ # 0 1
+ # |1 2 3 4| |13 14 15 16|
+ # |5 6 7 8| |17 18 19 20|
+ # |9 10 11 12| |21 22 23 24|

- echo a.permute(0,2,1) # dim 0 stays at 0, dim 1 becomes dim 2 and dim 2 becomes dim 1
+ echo a.permute(0,2,1) # dim 0 stays at 0, dim 1 becomes dim 2 and
+                       # dim 2 becomes dim 1

- # Tensor of shape 2x4x3 of type "int" on backend "Cpu"
- # | 1 5 9 | 13 17 21|
- # | 2 6 10 | 14 18 22|
- # | 3 7 11 | 15 19 23|
- # | 4 8 12 | 16 20 24|
+ # Tensor[system.int] of shape "[2, 4, 3]" on backend "Cpu"
+ # 0 1
+ # |1 5 9| |13 17 21|
+ # |2 6 10| |14 18 22|
+ # |3 7 11| |15 19 23|
+ # |4 8 12| |16 20 24|

  Concatenation
  ^^^^^^^^^^^^^
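`permute(0, 2, 1)` is pure index relabeling: the permuted tensor `p` satisfies `p[i][k][j] == a[i][j][k]`. A plain-Python sketch of that relabeling (an analogy; Arraymancer does this by swapping strides, without copying):

```python
# a = 1..24 reshaped to (2, 3, 4), built from the row-major formula.
a = [[[12 * i + 4 * j + k + 1 for k in range(4)] for j in range(3)]
     for i in range(2)]

# permute(0, 2, 1): shape (2, 3, 4) -> (2, 4, 3), p[i][k][j] = a[i][j][k].
p = [[[a[i][j][k] for j in range(3)] for k in range(4)] for i in range(2)]

print(p[0][0])   # first row of the permuted tensor: [1, 5, 9]
```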
@@ -71,16 +78,16 @@ Tensors can be concatenated along an axis with the ``concat`` proc.
  let c1 = c.reshape(2,3)

  echo concat(a,b,c0, axis = 0)
- # Tensor of shape 7x2 of type "int" on backend "Cpu"
- # |1 2|
- # |3 4|
- # |5 6|
- # |7 8|
- # |11 12|
- # |13 14|
- # |15 16|
+ # Tensor[system.int] of shape "[7, 2]" on backend "Cpu"
+ # |1 2|
+ # |3 4|
+ # |5 6|
+ # |7 8|
+ # |11 12|
+ # |13 14|
+ # |15 16|

  echo concat(a,b,c1, axis = 1)
- # Tensor of shape 2x7 of type "int" on backend "Cpu"
- # |1 2 5 6 11 12 13|
- # |3 4 7 8 14 15 16|
+ # Tensor[system.int] of shape "[2, 7]" on backend "Cpu"
+ # |1 2 5 6 11 12 13|
+ # |3 4 7 8 14 15 16|
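The two `concat` calls in this hunk can be sketched for 2-D nested lists in plain Python. The operand values are inferred from the printed outputs; this hypothetical `concat` helper is an analogy, not Arraymancer's proc:

```python
# Concatenate 2-D nested lists: axis 0 stacks rows end to end,
# axis 1 joins corresponding rows element-wise.
def concat(tensors, axis):
    if axis == 0:
        return [row for t in tensors for row in t]
    return [sum((t[i] for t in tensors), []) for i in range(len(tensors[0]))]

a  = [[1, 2], [3, 4]]
b  = [[5, 6], [7, 8]]
c0 = [[11, 12], [13, 14], [15, 16]]   # c reshaped to (3, 2)
c1 = [[11, 12, 13], [14, 15, 16]]     # c reshaped to (2, 3)

print(concat([a, b, c0], axis=0))   # 7 rows of 2
print(concat([a, b, c1], axis=1))   # 2 rows of 7
```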
