lite/tensor.h#

class Tensor#

Wrapper of the MegEngine Tensor.


Note

  • If the tensor memory is set through the reset interface, the memory is managed by the user and will not be freed by the tensor;

  • If the device_type or layout is not set, they will be copied from the source tensor when copying from another tensor;

  • If is_pinned_host is set, the storage memory of the tensor is pinned memory, which is used to optimize H2D or D2H memory copies. If the device or layout is not set, this tensor will automatically be set to a pinned tensor when copying from a device (CUDA) tensor.

Warning

The memory is not allocated directly. When get_memory_ptr is called, the memory will be allocated in the tensor implementation and will be deleted automatically.

Constructor

param device_type:

[in] The desired device type of created Tensor.

param device_id:

[in] The desired device id of created Tensor.

param stream_id:

[in] The desired stream id of the created Tensor on the desired device.

param backend:

[in] The desired backend of the created Tensor.

  • LITE_DEFAULT backend is MegEngine

  • LITE_RK_NPU backend is RKNN NPU

param is_pinned_host:

[in] Whether to use pinned memory.

  • false: use normal memory

  • true: use pinned memory (mainly on CUDA)

param layout:

[in] The desired layout of created Tensor.

friend class TensorHelper
Tensor()#

Default constructor.

Tensor(LiteDeviceType device_type, bool is_pinned_host = false)#

Constructor.

Tensor(LiteDeviceType device_type, const Layout &layout, bool is_pinned_host = false)#

Constructor.

Tensor(int device_id, LiteDeviceType device_type, const Layout &layout = {}, bool is_pinned_host = false)#

Constructor.

Tensor(int device_id, int stream_id, LiteDeviceType device_type, bool is_pinned_host = false)#

Constructor.

Tensor(LiteBackend backend, LiteDeviceType device_type = LiteDeviceType::LITE_CPU, int device_id = 0, const Layout &layout = {}, bool is_pinned_host = false)#

Constructor.

~Tensor()#

Destructor.

inline LiteDeviceType get_device_type() const#

Get device type of this Tensor.

Returns:

device type

inline int get_device_id() const#

Get device id of this Tensor.

inline Layout get_layout() const#

Get layout of this Tensor.

inline bool is_pinned_host() const#

Whether the Tensor is held in pinned memory.

Returns:

whether the Tensor is held in pinned memory

  • false: normal memory

  • true: pinned memory

void *get_memory_ptr() const#

Get memory address of data of this Tensor.

Note

this function will trigger memory allocation in the tensor implementation

Returns:

address pointer

void *get_memory_ptr(const std::vector<size_t> &idx) const#

Get the memory address at the offset described by idx of this Tensor.

Parameters:

idx[in] indices into the tensor

Returns:

address pointer

size_t get_tensor_total_size_in_byte() const#

Get the capacity of the Tensor in bytes.

bool is_continue_memory() const#

Check whether the memory of the tensor is contiguous.

void set_layout(const Layout &layout)#

set the layout of this Tensor

Note

this will change the layout and reallocate the memory of the tensor

Parameters:

layout[in] the layout that will be set into this Tensor

void reset(void *prepared_data, size_t data_length_in_byte)#

reset the tensor with user-allocated memory

Note

the memory will not be managed by Lite; the user is responsible for freeing it later

Parameters:
  • prepared_data[in] user prepared data pointer

  • data_length_in_byte[in] size of this memory

void reset(void *prepared_data, const Layout &layout)#

reset the tensor with user-allocated memory and the corresponding layout

Note

the memory will not be managed by Lite; the user is responsible for freeing it later

Parameters:
  • prepared_data[in] user prepared data pointer

  • layout[in] desired layout

void reshape(const std::vector<int> &shape)#

reshape the tensor with new shape

Note

the data type will keep unchanged

Parameters:

shape[in] target shape

std::shared_ptr<Tensor> slice(const std::vector<size_t> &start, const std::vector<size_t> &end, const std::vector<size_t> &step = {})#

get a slice of the original tensor

Note

if tensor = [[1, 2, 3], [4, 5, 6], [7, 8, 9]], start = {0, 0}, end = {2, 2}, step = {1, 2}, then result = [[1, 3], [4, 6], [7, 9]]

Parameters:
  • start[in] start idx of each dim

  • end[in] end idx of each dim

  • step[in] step of each dim

Returns:

shared pointer to a new Tensor

void fill_zero()#

fill the Tensor memory with zero

void copy_from(const Tensor &src)#

copy data from another tensor

Note

the best way to copy a tensor is to set only the dst device and leave the layout empty; the layout will be set to match src when copying

Parameters:

src[in] source tensor

void share_memory_with(const Tensor &src_tensor)#

share memory with another tensor

void update_from_implement()#

update the members from the implementation

struct Layout#

Description of the way data is organized in a tensor.

Public Functions

size_t get_elem_size() const#

get number of elements of this Layout

Returns:

number of elements

bool operator==(const Layout &other) const#

compare equality of two layouts

Parameters:

other[in] other layout

Returns:

result of the comparison

  • true: this layout is equal to other

  • false: this layout is not equal to other

Public Members

size_t shapes[MAXDIM]#

shape of each dim

size_t ndim = 0#

actual number of dims

LiteDataType data_type = LiteDataType::LITE_FLOAT#

data type

Public Static Attributes

static constexpr uint32_t MAXDIM = 7#

max dims