lite/tensor.h

class lite::Tensor

wrapper of the MegEngine Tensor

The memory is not allocated eagerly; it is allocated inside the tensor implementation when get_memory_ptr() is called, and it is freed automatically

Note: if the tensor memory is set through the reset() interface, the memory is managed by the user and will not be freed by the tensor

If the device or layout is not set, then when copying from another source tensor, the device and layout will be copied from the source tensor

If is_pinned_host is set, the storage memory of the tensor is pinned memory, which optimizes H2D or D2H memory copies. If the device or layout is not set, then when copying from a device (CUDA) tensor, this tensor will automatically be set as a pinned tensor

Public Functions

Tensor()
Tensor(LiteDeviceType device_type, bool is_pinned_host = false)
Tensor(LiteDeviceType device_type, const Layout &layout, bool is_pinned_host = false)
Tensor(int device_id, LiteDeviceType device_type, const Layout &layout = {}, bool is_pinned_host = false)
Tensor(int device_id, int stream_id, LiteDeviceType device_type, bool is_pinned_host = false)
Tensor(LiteBackend backend, LiteDeviceType device_type = LiteDeviceType::LITE_CPU, int device_id = 0, const Layout &layout = {}, bool is_pinned_host = false)
~Tensor()
inline LiteDeviceType get_device_type() const
inline int get_device_id() const
inline Layout get_layout() const
inline bool is_pinned_host() const
void set_layout(const Layout &layout)

set the layout; this changes the layout and reallocates the memory of the tensor

void *get_memory_ptr() const

get a pointer to the tensor memory; this triggers memory allocation in the tensor implementation

void *get_memory_ptr(const std::vector<size_t> &idx) const

get a pointer to the memory at the offset described by idx
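As a sketch of the idea, an index vector over a dense row-major layout maps to a single element offset. The helper below is our own illustration (the name and the row-major assumption are not part of the Lite API); it also shows how a partial index can select the start of a sub-block.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustration only: compute the row-major element offset for an index
// vector `idx` against the layout `shapes`, as get_memory_ptr(idx)
// conceptually does for a contiguous tensor.
size_t row_major_offset(const std::vector<size_t>& idx,
                        const std::vector<size_t>& shapes) {
    assert(idx.size() <= shapes.size());
    size_t offset = 0;
    for (size_t i = 0; i < idx.size(); ++i) {
        // accumulate: offset of [i0][i1]... in row-major order
        offset = offset * shapes[i] + idx[i];
    }
    // a partial index addresses the start of the remaining sub-block
    for (size_t i = idx.size(); i < shapes.size(); ++i) {
        offset *= shapes[i];
    }
    return offset;  // offset in elements, not bytes
}
```

For a 2x3x4 tensor, the full index {1, 2, 3} lands at element 1*12 + 2*4 + 3, and the partial index {1, 2} at the start of row [1][2].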

size_t get_tensor_total_size_in_byte() const

get the tensor capacity in bytes
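For a dense tensor, the capacity is the product of the shape entries times the element size of the data type. The enum and helpers below are our own sketch, not the Lite definitions, using the Layout fields (shapes, ndim, data_type) documented below.

```cpp
#include <cstddef>

// Illustration only: sketch of how a dense tensor's byte capacity follows
// from its layout. DataType and dtype_size are assumptions, not Lite API.
enum class DataType { Float32, Int32, Uint8 };

size_t dtype_size(DataType dt) {
    switch (dt) {
        case DataType::Float32: return 4;
        case DataType::Int32:   return 4;
        case DataType::Uint8:   return 1;
    }
    return 0;
}

size_t total_size_in_byte(const size_t* shapes, size_t ndim, DataType dt) {
    size_t elems = 1;
    for (size_t i = 0; i < ndim; ++i) elems *= shapes[i];
    return elems * dtype_size(dt);  // element count times element width
}
```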

void reset(void *prepared_data, size_t data_length_in_byte)

reset the memory of the tensor with user-allocated data; the memory is not managed by Lite, and the user must free it afterwards

void reset(void *prepared_data, const Layout &layout)

reset the data and layout of the tensor with user-allocated data and the corresponding layout; the memory is not managed by Lite, and the user must free it afterwards

void reshape(const std::vector<int> &shape)

reshape the tensor to a new shape, keeping the data type unchanged
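A reshape is valid only when the new shape describes the same number of elements as the old one; the underlying bytes and data type are unchanged. The checker below is our own illustration, not part of the Lite API.

```cpp
#include <cstddef>
#include <vector>

// Illustration only: a reshape must preserve the total element count.
bool reshape_compatible(const std::vector<size_t>& old_shape,
                        const std::vector<size_t>& new_shape) {
    size_t old_elems = 1, new_elems = 1;
    for (size_t s : old_shape) old_elems *= s;
    for (size_t s : new_shape) new_elems *= s;
    return old_elems == new_elems;
}
```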

std::shared_ptr<Tensor> slice(const std::vector<size_t> &start, const std::vector<size_t> &end, const std::vector<size_t> &step = {})

get a new tensor sliced from the original tensor
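The start/end/step triple works like a half-open [start, end) range with a stride, applied per dimension. The 1-D helper below is our own sketch of that semantics (slice() generalizes it across dimensions, with an empty step defaulting to 1); the name is not part of the Lite API.

```cpp
#include <cstddef>
#include <vector>

// Illustration only: half-open [start, end) slicing with a step,
// shown on a 1-D buffer.
std::vector<int> slice_1d(const std::vector<int>& data,
                          size_t start, size_t end, size_t step = 1) {
    std::vector<int> out;
    for (size_t i = start; i < end && i < data.size(); i += step)
        out.push_back(data[i]);
    return out;
}
```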

void fill_zero()

fill the tensor memory with zeros

void copy_from(const Tensor &src)

copy data from another tensor. Note: the best way to copy a tensor is to set only the destination device and leave the layout empty; during the copy, the destination layout is then set to match the source

void share_memory_with(const Tensor &src_tensor)

share memory with another tensor

bool is_continue_memory() const

whether the memory of the tensor is contiguous
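A tensor is contiguous (row-major) when each dimension's stride equals the product of all inner dimensions' sizes; slicing with a step can break this property. The stride-based check below is our own illustration: the documented Layout stores only shapes, ndim, and data_type, so strides here are an assumption.

```cpp
#include <cstddef>
#include <vector>

// Illustration only: row-major contiguity check over assumed
// element strides. Not part of the Lite API.
bool is_contiguous(const std::vector<size_t>& shapes,
                   const std::vector<size_t>& strides) {
    size_t expected = 1;
    // walk dimensions from innermost to outermost
    for (size_t i = shapes.size(); i-- > 0;) {
        if (strides[i] != expected) return false;
        expected *= shapes[i];
    }
    return true;
}
```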

void update_from_implement()

update the members from the implementation

Friends

friend class TensorHelper
struct lite::Layout

a simple layout description

Public Functions

size_t get_elem_size() const

get the total size of a layout in bytes

bool operator==(const Layout &other) const

compare whether two layouts are equal

Public Members

size_t shapes[MAXDIM]
size_t ndim = 0
LiteDataType data_type = LiteDataType::LITE_FLOAT

Public Static Attributes

static constexpr uint32_t MAXDIM = 7