Deep understanding of Tensor data structure#

MegEngine provides a data structure called "Tensor". It differs from the mathematical definition and is closer to NumPy's :py:class:`~numpy.ndarray`: a tensor is a homogeneous multidimensional array, in which each element occupies a memory block of the same size and all blocks are interpreted in exactly the same way. How the elements are interpreted is determined by the Tensor's :ref:`data type <tensor-dtype>`; each data type corresponds to a kind of Tensor.

  • We can perform various scientific calculations based on the Tensor data structure;

  • Tensor is also the main data structure used in neural network programming: the inputs, outputs, and transformations of a network are all represented by Tensors.

Note

The difference from NumPy is that MegEngine also supports using GPU devices for more efficient computation. When both GPU and CPU devices are available, MegEngine uses the GPU as the default computing device, without requiring any manual setting by the user.

See also

If you are not sure how to obtain a Tensor, please refer to How to create a Tensor.

Distinguishing between related concepts (terms)#

The Tensor concept we often mention is a generalization of several more specific concepts. Here are some examples:

| math   | computer science               | abstraction | concrete example   |
|--------|--------------------------------|-------------|--------------------|
| Scalar | Number                         | Point       | Score, probability |
| Vector | Array                          | Line        | List               |
| Matrix | 2-dimensional array (2d-array) | Plane       | Excel spreadsheet  |

Different research fields often use different terms to describe the same concept, and it is easy to get confused if these concepts are not kept distinct.

Python provides a built-in `array <https://docs.python.org/3/library/array.html>`_ module, but its usage differs from that of the NumPy array mentioned above, so we use the Python (nested) :py:class:`list` as an analogy instead. On subsequent pages we will gradually transition to the actual use and manipulation of Tensor.

Note: To make things easier to understand, we assume here that all elements in a Python list share the same data type, such as Number.
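The standard-library array module linked above can make the "homogeneous" idea concrete. The snippet below is a small sketch in plain Python (no MegEngine required):

```python
from array import array

# The array module stores homogeneous data: every element must match the
# declared type code, much like a Tensor's dtype fixes how elements are
# interpreted.
homogeneous = array('i', [0, 1, 2, 3])   # 'i' = signed int
homogeneous.append(4)                     # appending another int is fine
# homogeneous.append(1.5) would raise TypeError: integer argument expected

# A plain list, by contrast, may mix element types freely, which is why
# the list analogy only works under the uniform-type assumption above.
mixed = [0, 1.5, "two"]
```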

Note

In the field of deep learning, we usually refer to the above concepts collectively as tensors.

Access an element in Tensor#

For a number (or scalar) Tensor, obviously we can get its value directly because it has only one element.

>>> a = 20200325
>>> a
20200325

In other cases, to get an element of a Tensor you need to provide the integer index (Index) of the corresponding position, using the subscript operator []:

  • Note that Tensor indexing is zero-based (Zero-based), consistent with Python lists and NumPy multidimensional arrays;

  • For example, to get the 3rd element of the vector/array a = [0, 1, 2, 3, 4], we use a[2];

  • As another example, to get the element with value 6 from the following 2d-array b, we use b[1][2]:

>>> b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> b[1]
[4, 5, 6]
>>> b[1][2]
6

We can understand this as first accessing b[1], and then treating b[1] as a sequence in its own right and accessing the element at index 2 within it.

The two-dimensional case is analogous to indexing a matrix \(M\) by row first and then by column:

\[\begin{split}M = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & \color{blue}{6} \\ 7 & 8 & 9 \\ \end{bmatrix} \quad M_{(1,2)} = 6\end{split}\]

In higher dimensions, it becomes impractical to keep inventing special terms like "scalar", "vector", "matrix"... to describe the structure.

  • Therefore, the concept of n-dimensional tensor is provided in mathematics. Correspondingly, n-dimensional array is provided in NumPy;

  • The n in n-dimensional tensor and n-dimensional array indicates that n index values need to be provided to obtain elements from it.

| math                             | computer science               | number of scalar indices needed to get a value |
|----------------------------------|--------------------------------|------------------------------------------------|
| Scalar                           | Number                         | 0                                              |
| Vector                           | Array                          | 1                                              |
| Matrix                           | 2-dimensional array (2d-array) | 2                                              |
| n-dimensional tensor (nd-tensor) | n-dimensional array (nd-array) | n                                              |

Now we can set the terms above aside and simply use n to describe the number of dimensions of a Tensor.

So we can understand:

  • A scalar is a 0-dimensional Tensor;

  • A vector is a 1-dimensional Tensor;

  • A matrix is a 2-dimensional Tensor;

  • An n-dimensional array is an n-dimensional Tensor.

When accessing a specific element of an n-dimensional Tensor (call it \(T\)), the following syntax can be used:

\[T_{[i_1][i_2]\ldots [i_n]}\]

That is, we must provide indices \(i_1, i_2, \ldots ,i_n\); each index reduces the dimensionality by one, finally yielding a 0-dimensional number (a scalar).

For example, suppose we learn that the person we are looking for lives in a community in Building 23, Unit 3, Room 902 (i.e. Floor 9, Room 2); then we need to visit court[23][3][9][2].
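The idea that each index removes one dimension can be sketched with a small helper for nested Python lists (get_element is a hypothetical illustration, not part of any Tensor API):

```python
# Applying T[i1][i2]...[in] one index at a time: each subscript strips one
# dimension, until a 0-dimensional value (a scalar) remains.
def get_element(nested, indices):
    value = nested
    for i in indices:
        value = value[i]   # one index applied -> one dimension fewer
    return value

b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # 2-dimensional: needs 2 indices
assert get_element(b, (1, 2)) == 6     # same as b[1][2]
```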

See also

In fact, there are more efficient indexing methods for Tensor and multi-dimensional arrays, please refer to Index in multiple dimensions.

Note

Tensor in the field of deep learning is actually a multi-dimensional array (N-dimensional array).

Use slices to get some elements#

Earlier we showed how to access a single element. Another common situation is the need to access some elements.

Consistent with Python, we can use the slicing (Slicing) operator to access and modify part of the elements of a Tensor:

>>> a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a[2:8:2]
[2, 4, 6]

Observing the example above: we performed a slicing operation with the : symbol. The syntax is start:stop:step, corresponding to the start index, end index, and step length. This notation actually creates a slice object slice(start, stop, step) behind the scenes; the two forms are equivalent:

>>> myslice = slice(2, 8, 2)
>>> a[myslice]
[2, 4, 6]

Note

  • start, stop, and step can also be negative numbers, meaning the index traversal order is opposite to the default;

  • The index range formed by start and stop is left-closed and right-open, i.e. [start, stop); a[stop] itself is not within the slice;

  • In fact, this design goes hand in hand with zero-based indexing and has several benefits: given only the stop position, we immediately know how many elements lie before it; stop minus start gives the length of the slice without off-by-one confusion; and a[:i] and a[i:] split the original data into two non-overlapping parts.
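These benefits are easy to verify with a plain Python list:

```python
a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

# Left-closed, right-open intervals: a[:i] and a[i:] split the data into
# two non-overlapping parts that recombine exactly.
i = 4
assert a[:i] + a[i:] == a

# For step 1 (and indices within bounds), the slice length is simply
# stop - start.
start, stop = 2, 8
assert len(a[start:stop]) == stop - start
```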

See also

Computer scientist Edsger W. Dijkstra's essay `Why numbering should start at zero <https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html>`_ gives a good explanation of zero-based subscripts and the left-closed, right-open interval convention.

Furthermore, parts of the slice syntax may be omitted:

  • If there is no colon operator in the subscript operator, as in a[i], the single element at that index position is returned;

  • If there is exactly one colon operator in the subscript operator, the meaning depends on how it is written:

    • If it is a[start:], all items from position start onward are extracted;

    • If it is a[:stop], all items before position stop are extracted;

    • If it is a[start:stop], the items from start up to (but not including) stop are extracted;

  • If step is not specified, it defaults to 1 and every item in the slice range is extracted.
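These omission rules can be checked directly on a Python list:

```python
a = [0, 1, 2, 3, 4, 5]

assert a[2:] == [2, 3, 4, 5]           # from position start onward
assert a[:3] == [0, 1, 2]              # everything before stop
assert a[1:4] == [1, 2, 3]             # start up to (not including) stop
assert a[1:5:2] == [1, 3]              # explicit step of 2
assert a[::-1] == [5, 4, 3, 2, 1, 0]   # negative step reverses the order
```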

Slicing syntax also works with multidimensional arrays:

>>> b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> b[0:2]
[[1, 2, 3], [4, 5, 6]]

At this point, b can be understood as a one-dimensional array whose elements are themselves one-dimensional arrays:

>>> a1 = [1, 2, 3]
>>> a2 = [4, 5, 6]
>>> a3 = [7, 8, 9]
>>> b = [a1, a2, a3]
>>> b[0:2]
[[1, 2, 3], [4, 5, 6]]
>>> [a1, a2]
[[1, 2, 3], [4, 5, 6]]

Here we only indexed the outermost layer; more complicated cases will be explained in later sections.
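For nested Python lists specifically, slicing only cuts the outermost layer; extracting an inner sub-block requires handling each row, for example with a comprehension (this is only the list workaround, not Tensor slicing syntax):

```python
b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Slicing a nested list only selects whole rows:
assert b[0:2] == [[1, 2, 3], [4, 5, 6]]

# To also cut the inner layer, each selected row must be sliced in turn:
top_left = [row[0:2] for row in b[0:2]]
assert top_left == [[1, 2], [4, 5]]
```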

See also

Slicing lets us access contiguous elements of a Tensor by index, but sometimes the elements we want are not contiguous and instead form a combination of elements at specific positions; in that case we can use the :ref:`array index <array-indexing>`.

Next: Tensor basic attributes#

Through the content of this section, users can grasp the most basic Tensor concepts.

To make learning and the transition easier for beginners, the code examples above used Python's list to show the consistency between the MegEngine Tensor data structure and Python's nested list design, but in fact there are still some differences between the two.

Let’s give some more examples, please try to guess the output:

Python nested list

>>> c = [[1, 2, 3],
...      [4, 5, 6],
...      [7, 8, 9]]
>>> c[1, 1]

MegEngine 2-d Tensor

>>> from megengine import Tensor
>>> c = Tensor([[1, 2, 3],
...             [4, 5, 6],
...             [7, 8, 9]])
>>> c[1, 1]

Python nested lists do not support this syntax. Can you guess the effect of using , in the [] operator?
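For the nested-list half of the quiz, the behavior can be verified directly in plain Python (the Tensor half is left for you to try in MegEngine):

```python
c = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# A list index must be an int or a slice; c[1, 1] actually passes the
# tuple (1, 1), so a nested Python list rejects it with a TypeError.
try:
    c[1, 1]
    supported = True
except TypeError:
    supported = False

assert supported is False
assert c[1][1] == 5   # chained single indices still work
```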

Suppose we now need to extract the elements in the blue part from the following 2-dimensional Tensor. What do we need to do? (:ref:`Answer <multi-dim-slicing>`)

\[\begin{split}M = \begin{bmatrix} 1 & 2 & 3 \\ \color{blue}{4} & \color{blue}{5} & 6 \\ 7 & 8 & 9 \\ \end{bmatrix} \quad M_{(?,?)} = (4 \quad 5)\end{split}\]

To answer these questions, you first need to understand Tensor's rank, axes, and shape attributes and the related concepts, so as to better understand some of Tensor's characteristics; you can then find the answers in the following sections.

See also

Tensor data type

We mentioned that every element in a Tensor has the same data type. If you want to know which data types a Tensor supports, please refer to Tensor data type.

The device where the Tensor is located

The ability to use GPU devices for efficient computation is an advantage MegEngine has over NumPy. To understand the differences between devices, please refer to The device where the Tensor is located.

Examples of Tensor visualization

If your current concept of Tensor is not intuitive enough, you can refer to Examples of Tensor visualization.

Tensor memory layout

Some experienced developers like to study the underlying details; they can refer to Tensor memory layout.

Consortium for Python Data API Standards

Many of MegEngine's standard Tensor API designs follow the recommendations of the Consortium for Python Data API Standards, and common implementations stay as close to NumPy as possible. For more details, please refer to Consortium for Python Data API Standards.