lite/tensor.h

class Tensor

Wrapper of the MegEngine Tensor.


Note

  • If the tensor's memory is set through the ``reset`` interface, the memory should be managed by the user; the tensor will not free it automatically.

  • If ``device_type`` or ``layout`` is not set, they will be set to the corresponding values of the source tensor when copying from another tensor.

  • If ``is_pinned_host`` is set, the memory storing the tensor will be CUDA page-locked (pinned) memory, which is used to optimize H2D or D2H memory copies. In addition, when the tensor's device or layout is unspecified and it is created by copying from a device tensor, it is automatically set to a pinned-memory tensor.

Warning

Memory is not allocated immediately; the memory in the tensor is actually allocated only when the :cpp:func:`get_memory_ptr` interface is used, and it is released automatically when necessary.
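A minimal sketch of the lazy-allocation behavior described above. This assumes MegEngine Lite's C++ API with the types of this page living in the ``lite`` namespace; the header path and enum values follow this page and may differ in your build.

```cpp
#include "lite/tensor.h"

int main() {
    using namespace lite;
    // Describe a 1x3 float tensor; no memory is allocated here.
    Layout layout;
    layout.ndim = 2;
    layout.shapes[0] = 1;
    layout.shapes[1] = 3;
    layout.data_type = LiteDataType::LITE_FLOAT;

    Tensor tensor(LiteDeviceType::LITE_CPU, layout);
    // The first get_memory_ptr() call triggers the actual allocation.
    float* ptr = static_cast<float*>(tensor.get_memory_ptr());
    ptr[0] = 1.f;  // memory is now valid and managed by the tensor
    return 0;
}
```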

Constructors

Parameter: device_type

[in] The desired device type of the created Tensor.

Parameter: device_id

[in] The desired device id of the created Tensor.

Parameter: stream_id

[in] The desired stream id of the created Tensor on the desired device.

Parameter: backend

[in] The desired backend of the created Tensor.

  • LITE_DEFAULT backend is MegEngine

  • LITE_RK_NPU backend is RKNN NPU

Parameter: is_pinned_host

[in] Whether to use pinned memory.

  • false: use normal memory

  • true: use pinned memory (mainly on CUDA)

Parameter: layout

[in] The desired layout of the created Tensor.

friend class TensorHelper
Tensor()

Default constructor.

Tensor(LiteDeviceType device_type, bool is_pinned_host = false)

Constructor.

Tensor(LiteDeviceType device_type, const Layout &layout, bool is_pinned_host = false)

Constructor.

Tensor(int device_id, LiteDeviceType device_type, const Layout &layout = {}, bool is_pinned_host = false)

Constructor.

Tensor(int device_id, int stream_id, LiteDeviceType device_type, bool is_pinned_host = false)

Constructor.

Tensor(LiteBackend backend, LiteDeviceType device_type = LiteDeviceType::LITE_CPU, int device_id = 0, const Layout &layout = {}, bool is_pinned_host = false)

Constructor.
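Taken together, the overloads above let you pin down as much or as little as you need at construction time. A hedged sketch, assuming the ``lite`` namespace and that a CUDA device is available for the device examples:

```cpp
#include "lite/tensor.h"

int main() {
    using namespace lite;
    Tensor t0;                                     // default: everything unset
    Tensor t1(LiteDeviceType::LITE_CPU);           // device type only
    Tensor t2(LiteDeviceType::LITE_CPU,
              /*is_pinned_host=*/true);            // pinned host memory

    Layout layout;
    layout.ndim = 1;
    layout.shapes[0] = 16;
    layout.data_type = LiteDataType::LITE_FLOAT;
    Tensor t3(LiteDeviceType::LITE_CUDA, layout);  // device type + layout

    Tensor t4(/*device_id=*/0, /*stream_id=*/0,
              LiteDeviceType::LITE_CUDA);          // explicit device & stream
    return 0;
}
```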

~Tensor()

Destructor.

inline LiteDeviceType get_device_type() const

Get device type of this Tensor.

Returns

device type

inline int get_device_id() const

Get device id of this Tensor.

inline Layout get_layout() const

Get layout of this Tensor.

inline bool is_pinned_host() const

Whether the Tensor is in pinned memory.

Returns

whether the Tensor is in pinned memory

  • false: normal memory

  • true: pinned memory

void *get_memory_ptr() const

Get memory address of data of this Tensor.

Note

This function will trigger memory allocation in the tensor implementation.

Returns

address pointer

void *get_memory_ptr(const std::vector<size_t> &idx) const

Get the memory address at the offset described by idx of this Tensor.

Parameters

idx[in] indices of the tensor

Returns

address pointer

size_t get_tensor_total_size_in_byte() const

Get the capacity of the Tensor in bytes.

bool is_continue_memory() const

Check whether the memory of the tensor is contiguous.

void set_layout(const Layout &layout)

Set layout to this Tensor.

Note

This will change the layout and reallocate the memory of the tensor.

Parameters

layout[in] layout that will be set into this Tensor

void reset(void *prepared_data, size_t data_length_in_byte)

Reset the tensor with user-allocated memory.

Note

The memory will not be managed by lite; the user should free it later.

Parameters
  • prepared_data[in] user-prepared data pointer

  • data_length_in_byte[in] size of this memory
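Since ``reset`` hands the tensor a user-owned buffer, the caller keeps ownership and must keep the buffer alive for as long as the tensor uses it. A sketch, assuming the ``lite`` namespace:

```cpp
#include "lite/tensor.h"

#include <vector>

int main() {
    using namespace lite;
    Layout layout;
    layout.ndim = 2;
    layout.shapes[0] = 4;
    layout.shapes[1] = 4;
    layout.data_type = LiteDataType::LITE_FLOAT;

    Tensor tensor(LiteDeviceType::LITE_CPU, layout);

    // User-owned buffer: lite will NOT free this memory.
    std::vector<float> buffer(4 * 4, 0.f);
    tensor.reset(buffer.data(), buffer.size() * sizeof(float));
    // `buffer` must outlive every use of `tensor`.
    return 0;
}
```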

void reset(void *prepared_data, const Layout &layout)

Reset the tensor with user-allocated memory and the corresponding layout.

Note

The memory will not be managed by lite; the user should free it later.

Parameters
  • prepared_data[in] user-prepared data pointer

  • layout[in] desired layout

void reshape(const std::vector<int> &shape)

Reshape the tensor with a new shape.

Note

The data type will stay unchanged.

Parameters

shape[in] target shape

std::shared_ptr<Tensor> slice(const std::vector<size_t> &start, const std::vector<size_t> &end, const std::vector<size_t> &step = {})

Get a slice of the original tensor.

Note

if tensor = [[1, 2, 3], [4, 5, 6], [7, 8, 9]], start = {0, 0}, end = {2, 2}, step = {1, 2}. Then result = [[1, 3], [4, 6], [7, 9]]

Parameters
  • start[in] start idx of each dim

  • end[in] end idx of each dim

  • step[in] step of each dim

Returns

shared pointer to a new Tensor

void fill_zero()

Memset the Tensor with zeros.

void copy_from(const Tensor &src)

Copy data from another tensor.

Note

The best way to copy a tensor is to set only the dst device and leave the layout empty; the layout will be set to the same as src when copying.

Parameters

src[in] source tensor
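The recommendation in the note can be sketched as follows, assuming the ``lite`` namespace; the dst tensor only names its device, and its layout is adopted from src during the copy:

```cpp
#include "lite/tensor.h"

int main() {
    using namespace lite;
    Layout layout;
    layout.ndim = 1;
    layout.shapes[0] = 256;
    layout.data_type = LiteDataType::LITE_FLOAT;

    Tensor src(LiteDeviceType::LITE_CPU, layout);
    src.fill_zero();

    // dst: set only the device and leave the layout empty;
    // copy_from() sets dst's layout to match src automatically.
    Tensor dst(LiteDeviceType::LITE_CPU);
    dst.copy_from(src);
    return 0;
}
```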

void share_memory_with(const Tensor &src_tensor)

Share memory with another tensor.

void update_from_implement()

Update the members from the implementation.

struct Layout

Description of the way of data organized in a tensor.

Public Functions

size_t get_elem_size() const

Get the number of elements of this Layout.

Returns

number of elements

bool operator==(const Layout &other) const

Compare equality of two layouts.

Parameters

other[in] other layout

Returns

result of the comparison

  • true: this layout is equal to other

  • false: this layout is not equal to other

Public Members

size_t shapes[MAXDIM]

shape of each dim

size_t ndim = 0

actual number of dims

LiteDataType data_type = LiteDataType::LITE_FLOAT

data type

Public Static Attributes

static constexpr uint32_t MAXDIM = 7

max dims