megenginelite.tensor

class LiteTensor(layout=None, device_type=LiteDeviceType.LITE_CPU, device_id=0, is_pinned_host=False, shapes=None, dtype=None)[source]

the tensor that holds a block of data

copy_from(src_tensor)[source]

copy memory from the src_tensor

property device_id

get device id of the tensor

property device_type

get the device type of the tensor

fill_zero()[source]

fill the buffer memory with zero

get_ctypes_memory()[source]

get the memory of the tensor, returned as a c_void_p pointing to the tensor memory

get_data_by_share()[source]

get the data in the tensor by sharing it with a new numpy array, and return that numpy array. Be careful: the data in the numpy array is only valid until the tensor memory is written again, for example when LiteNetwork runs forward the next time.
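The lifetime caveat above is the usual view-versus-copy distinction. A minimal sketch using plain numpy (not megenginelite itself) of why a shared array goes stale once the underlying buffer is rewritten:

```python
import numpy as np

# A numpy buffer standing in for the tensor's memory.
buf = np.zeros(4, dtype=np.float32)

view = buf[:]          # shares memory, like get_data_by_share()
snapshot = buf.copy()  # an independent copy of the current contents

buf[:] = 7.0           # the buffer is rewritten, e.g. by the next forward

print(view)      # reflects the new contents: the shared view went stale
print(snapshot)  # still the old zeros: the copy is unaffected
```

If the result must outlive the next inference, copy it out (e.g. with numpy's `copy()`) instead of holding the shared view.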

property is_continue

whether the tensor memory is contiguous

property is_pinned_host

whether the tensor is a pinned-host tensor

property layout
property nbytes

get the length of the memory in bytes

reshape(shape)[source]

reshape the tensor without changing the data, only the shape :param shape: int array of the destination shape
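A short numpy sketch (not megenginelite itself) of the reshape contract described above: only the shape metadata changes, the element count and the underlying data stay the same.

```python
import numpy as np

data = np.arange(12, dtype=np.float32)

# 12 elements can be viewed as 3 x 4; the data itself is untouched.
reshaped = data.reshape([3, 4])

print(reshaped.shape)       # (3, 4)
print(reshaped.ravel()[5])  # 5.0 -- same data, new shape
```

A destination shape whose element count differs from the tensor's is invalid, since no data is copied or discarded.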

set_data_by_copy(data, data_length=0, layout=None)[source]

copy the data to the tensor :param data: the data to copy to the tensor; it should be a list, a numpy.ndarray, or ctypes data with length

set_data_by_share(data, length=0, layout=None)[source]

share the data with the tensor :param data: the data to be shared with the tensor; it should be a numpy.ndarray or ctypes data

share_memory_with(src_tensor)[source]

share the same memory with the src_tensor; the tensor's own memory will be freed

slice(start, end, step=None)[source]

slice the tensor with the given start, end, step :param start: slice begin index of each dim :param end: slice end index of each dim :param step: slice step of each dim
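The per-dimension start/end/step lists map onto ordinary multi-dimensional slicing. A numpy sketch (not megenginelite itself) of the same indexing on a 4 x 6 array:

```python
import numpy as np

arr = np.arange(24).reshape(4, 6)

# Per-dimension begin, end, and step, one entry per dim.
start, end, step = [1, 0], [4, 6], [2, 3]

# Equivalent to slicing each dim with start[i]:end[i]:step[i].
sliced = arr[start[0]:end[0]:step[0], start[1]:end[1]:step[1]]

print(sliced.shape)  # (2, 2): rows 1 and 3, columns 0 and 3
```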

to_numpy()[source]

get the buffer of the tensor as a numpy array

update()[source]

update the members from C; this is used automatically after slice and share operations

class LiteLayout(shape=None, dtype=None)[source]

a simple layout description

data_type

Structure/Union member

property dtype
ndim

Structure/Union member

property shapes