megenginelite.tensor

class LiteTensor(layout=None, device_type=LiteDeviceType.LITE_CPU, device_id=0, is_pinned_host=False, shapes=None, dtype=None, physic_construct=True)[source]

Description of a block of data with necessary information.

Parameters
  • layout – layout of Tensor

  • device_type – device type of Tensor

  • device_id – device id of Tensor

  • is_pinned_host – when set, the storage memory of the tensor is pinned memory, which optimizes H2D or D2H memory copies. If the device or layout is not set, this tensor will automatically become a pinned tensor when copying from another device (CUDA) tensor

  • shapes – the shape of data

  • dtype – data type

Note

The number of dims of the shape should be less than 8. The supported data types are defined in LiteDataType.
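A minimal construction sketch, assuming LiteTensor, LiteLayout, LiteDataType and LiteDeviceType can be imported from the megenginelite package:

from megenginelite import LiteTensor, LiteLayout, LiteDataType, LiteDeviceType

# describe a float32 tensor of shape (1, 3, 224, 224) resident on CPU
layout = LiteLayout([1, 3, 224, 224], LiteDataType.LITE_FLOAT)
tensor = LiteTensor(layout=layout, device_type=LiteDeviceType.LITE_CPU)
tensor.fill_zero()                       # zero the backing storage
assert tensor.nbytes == 1 * 3 * 224 * 224 * 4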

copy_from(src_tensor)[source]

copy memory from the src_tensor

Parameters

src_tensor – source tensor
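A sketch of copying one tensor into another; the shapes are hypothetical and the import location is assumed:

import numpy as np
from megenginelite import LiteTensor, LiteLayout, LiteDataType

src = LiteTensor(layout=LiteLayout([2, 3], LiteDataType.LITE_FLOAT))
dst = LiteTensor(layout=LiteLayout([2, 3], LiteDataType.LITE_FLOAT))
src.set_data_by_copy(np.arange(6, dtype=np.float32))
dst.copy_from(src)                       # dst now holds its own copy of src's data
assert np.array_equal(dst.to_numpy(), src.to_numpy())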

property device_id

get device id of the tensor

property device_type

get device type of the tensor

fill_zero()[source]

fill the buffer memory with zero

get_ctypes_memory()[source]

get the memory of the tensor, return a c_void_p pointing to the tensor memory
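A sketch of inspecting the raw buffer through the returned pointer, assuming a CPU tensor whose storage can be dereferenced from Python:

import ctypes
from megenginelite import LiteTensor, LiteLayout, LiteDataType

t = LiteTensor(layout=LiteLayout([4], LiteDataType.LITE_FLOAT))
t.fill_zero()
ptr = t.get_ctypes_memory()              # c_void_p to the tensor storage
floats = ctypes.cast(ptr, ctypes.POINTER(ctypes.c_float))
assert floats[0] == 0.0                  # first element was zeroed above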

get_data_by_share()[source]

get the data in the tensor, share the data with a new numpy array, and return the numpy array

Note

Be careful: the data in the numpy array is only valid until the tensor memory is written again, for example by the next LiteNetwork forward.
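A sketch of reading the tensor through a shared numpy view; per the note above, copy the view out before the tensor memory is written again (import location assumed):

import numpy as np
from megenginelite import LiteTensor, LiteLayout, LiteDataType

t = LiteTensor(layout=LiteLayout([2, 2], LiteDataType.LITE_FLOAT))
t.set_data_by_copy(np.ones((2, 2), dtype=np.float32))
view = t.get_data_by_share()             # numpy array sharing the tensor storage
snapshot = view.copy()                   # take a copy while the view is still valid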

property is_continue

whether the tensor memory is contiguous

property is_pinned_host

whether the tensor is a pinned tensor

property layout
property nbytes

get the length of the memory in bytes

reshape(shape)[source]

reshape the tensor without changing the data.

Parameters

shape – target shape
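A sketch of reshaping in place, assuming reshape keeps the same elements and only changes the recorded shape (import location assumed):

import numpy as np
from megenginelite import LiteTensor, LiteLayout, LiteDataType

t = LiteTensor(layout=LiteLayout([2, 6], LiteDataType.LITE_FLOAT))
t.set_data_by_copy(np.arange(12, dtype=np.float32))
t.reshape([3, 4])                        # 2 * 6 == 3 * 4, data is untouched
assert t.to_numpy().shape == (3, 4)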

set_data_by_copy(data, data_length=0, layout=None)[source]

copy the data to the tensor

Parameters
  • data – the data to copy to the tensor, it should be a list, a numpy.ndarray, or ctypes data with a length

  • data_length – length of data in bytes

  • layout – layout of data
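A sketch of copying a numpy array into the tensor; leaving data_length and layout at their defaults is assumed to be fine when the array already matches the tensor layout:

import numpy as np
from megenginelite import LiteTensor, LiteLayout, LiteDataType

t = LiteTensor(layout=LiteLayout([8], LiteDataType.LITE_FLOAT))
t.set_data_by_copy(np.arange(8, dtype=np.float32))   # 32 bytes copied into the tensor
assert t.to_numpy()[3] == 3.0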

set_data_by_share(data, length=0, layout=None)[source]

share the data to the tensor

Parameters

data – the data that will be shared to the tensor, it should be a numpy.ndarray or ctypes data
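A sketch of sharing a numpy buffer instead of copying it; this assumes the source array must stay alive while the tensor uses its memory (import location assumed):

import numpy as np
from megenginelite import LiteTensor, LiteLayout, LiteDataType

data = np.arange(8, dtype=np.float32)
t = LiteTensor(layout=LiteLayout([8], LiteDataType.LITE_FLOAT))
t.set_data_by_share(data)                # tensor reads data's buffer, no copy is made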

share_memory_with(src_tensor)[source]

share the same memory with the src_tensor, the self memory will be freed

Parameters

src_tensor – the source tensor that will share memory with this tensor
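A sketch of aliasing one tensor's storage to another's; the shapes are hypothetical and both tensors are assumed to use the same layout:

import numpy as np
from megenginelite import LiteTensor, LiteLayout, LiteDataType

a = LiteTensor(layout=LiteLayout([4], LiteDataType.LITE_FLOAT))
b = LiteTensor(layout=LiteLayout([4], LiteDataType.LITE_FLOAT))
a.set_data_by_copy(np.arange(4, dtype=np.float32))
b.share_memory_with(a)                   # b frees its own storage and aliases a's
assert np.array_equal(b.to_numpy(), a.to_numpy())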

slice(start, end, step=None)[source]

slice the tensor with the given start, end and step

Parameters
  • start – slice begin index of each dim

  • end – slice end index of each dim

  • step – slice step of each dim
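A sketch of slicing every dim with per-dim start and end indices; it assumes slice returns a new LiteTensor viewing the selected region and that an omitted step means 1:

import numpy as np
from megenginelite import LiteTensor, LiteLayout, LiteDataType

t = LiteTensor(layout=LiteLayout([4, 4], LiteDataType.LITE_FLOAT))
t.set_data_by_copy(np.arange(16, dtype=np.float32).reshape(4, 4))
sub = t.slice([1, 1], [3, 3])            # rows 1..2 and columns 1..2
assert sub.to_numpy().shape == (2, 2)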

to_numpy()[source]

get the buffer of the tensor as a numpy array

update()[source]

update the members from C; this will be used automatically after slice and share

class LiteLayout(shape=None, dtype=None)[source]

Description of the layout used in Lite. A Lite layout is totally defined by shape and data type.

Parameters
  • shape – the shape of data.

  • dtype – data type.

Note

The number of dims of the shape should be less than 8. The supported data types are defined in LiteDataType.

Examples

from megenginelite import LiteLayout, LiteDataType

layout = LiteLayout([1, 4, 8, 8], LiteDataType.LITE_FLOAT)
assert layout.shapes == [1, 4, 8, 8]
assert layout.dtype == LiteDataType.LITE_FLOAT
data_type

Structure/Union member

property dtype
ndim

Structure/Union member

property shapes