Searched refs:tensor (Results 1 – 25 of 34) sorted by relevance
47    * * 0: An n-D tensor, specifying the tensor to be reshaped
82    * Supported tensor rank: up to 4
109   * Supported tensor rank: up to 4
135   * Pads a tensor.
144   * Supported tensor rank: up to 4
147   * * 0: An n-D tensor, specifying the tensor to be padded.
212   * Given a tensor input, this operation returns a tensor of the same
224   * * 0: An n-D tensor, the tensor to be squeezed.
258   * * 0: An n-D tensor, specifying the tensor to be sliced.
330   * perm tensor.
[all …]
364   * * 0: A tensor.
1090  * in the input tensor with each element in the output tensor.
1238  * * 0: An n-D tensor, specifying the tensor to be normalized.
1240  * * 0: A 4-D tensor, specifying the tensor to be normalized.
2045  * * 0: A tensor, specifying the tensor to be reshaped.
2502  * * 0: An n-D tensor, specifying the tensor to be reshaped
2656  * * 0: An n-D tensor, specifying the tensor to be padded.
2950  * * 0: An n-D tensor, specifying the tensor to be transposed.
3838  * * 0: An n-D tensor, specifying the tensor to be shuffled.
4806  * * 0: An n-D tensor, specifying the tensor to be padded.
[all …]
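The hits above describe the NNAPI shape-manipulation operations (RESHAPE, PAD, SQUEEZE, TRANSPOSE with a perm tensor). As a rough illustration only, a minimal numpy sketch of the same semantics — not the NNAPI itself — might look like:

```python
import numpy as np

# Numpy sketch of the operation semantics from the hits above:
# RESHAPE keeps the same values in the same order, PAD grows
# dimensions with zeros, SQUEEZE drops size-1 dimensions, and
# TRANSPOSE permutes axes according to a perm tensor.
x = np.arange(6, dtype=np.float32).reshape(2, 3)

reshaped = x.reshape(3, 2)                # same 6 values, new shape
padded = np.pad(x, [(1, 1), (0, 0)])      # pad the first dim -> (4, 3)
squeezed = x[np.newaxis, ...].squeeze(0)  # drop the size-1 axis again
transposed = x.transpose([1, 0])          # perm = [1, 0] -> shape (3, 2)
```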
333   * of the element type byte size, e.g., a tensor with
574   * A tensor operand type with all dimensions specified is "fully
576   * known at model construction time), a tensor operand type should be
580   * If a tensor operand's type is not fully specified, the dimensions
586   * <p>In the following situations, a tensor operand type must be fully
594   * model within a compilation. A fully specified tensor operand type
602   * not have a fully specified tensor operand type.</li>
607   * A fully specified tensor operand type must either be provided
613   * A tensor operand type of specified rank but some number of
619   * Starting at NNAPI feature level 3, a tensor operand type of unspecified rank is
[all …]
51 * A tensor of OEM specific values.
189   * * 0: A tensor.
751   * in the input tensor with each element in the output tensor.
870   * * 0: An n-D tensor, specifying the tensor to be normalized.
1531  * * 0: A tensor, specifying the tensor to be reshaped.
1869  * * 0: An n-D tensor, specifying the tensor to be reshaped
1979  * * 0: An n-D tensor, specifying the tensor to be padded.
2072  * * 0: An n-D tensor, the tensor to be squeezed.
2107  * * 0: An n-D tensor, specifying the tensor to be sliced.
2794  * * 0: An n-D tensor, specifying the tensor to be shuffled.
3613  * * 0: An n-D tensor, specifying the tensor to be padded.
[all …]
57    * If the prepared model was prepared from a model wherein all tensor
109   * If the prepared model was prepared from a model wherein all tensor
152   * If the prepared model was prepared from a model wherein all tensor
133 * and transformed tensor buffers. Any modification to the data cache should
128   * * 0: A tensor.
738   * in the input tensor with each element in the output tensor.
860   * * 0: An n-D tensor, specifying the tensor to be normalized.
1544  * * 0: A tensor, specifying the tensor to be reshaped.
1917  * * 0: An n-D tensor, specifying the tensor to be reshaped
2039  * * 0: An n-D tensor, specifying the tensor to be padded.
2136  * * 0: An n-D tensor, the tensor to be squeezed.
2173  * * 0: An n-D tensor, specifying the tensor to be sliced.
2969  * * 0: An n-D tensor, specifying the tensor to be shuffled.
3829  * * 0: An n-D tensor, specifying the tensor to be padded.
[all …]
62    * If the prepared model was prepared from a model wherein all tensor
148   * If the prepared model was prepared from a model wherein all tensor
108   * * 0: A tensor.
468   * * 0: A tensor.
524   * * 0: A tensor.
534   * in the input tensor with each element in the output tensor.
589   * in the Output tensor. For a miss, the corresponding sub-tensor in
647   * * 0: A 4-D tensor, specifying the tensor to be normalized.
1097  * * 0: A tensor.
1182  * Reshapes a tensor.
1184  * Given tensor, this operation returns a tensor that has the same values as
1194  * * 0: A tensor, specifying the tensor to be reshaped.
[all …]
37 * A tensor of OEM specific values.
52 * If the prepared model was prepared from a model wherein all tensor
20    def convert_to_time_major(tensor, tensor_shape):
21        return np.array(tensor).reshape(tensor_shape).transpose(
30    def reverse_batch_major(tensor, tensor_shape):
31        return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()
33    def split_tensor_in_two(tensor, tensor_shape):
34        tensor = np.array(tensor).reshape(tensor_shape)
35        left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
20    def convert_to_time_major(tensor, tensor_shape):
21        return np.array(tensor).reshape(tensor_shape).transpose([1, 0, 2
31    def reverse_batch_major(tensor, tensor_shape):
32        return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()
35    def split_tensor_in_two(tensor, tensor_shape):
36        tensor = np.array(tensor).reshape(tensor_shape)
37        left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
42    def convert_to_time_major(tensor, num_batches, max_time, input_size):
43        return np.array(tensor).reshape([num_batches, max_time, input_size
39    def convert_to_time_major(tensor, num_batches, max_time, input_size):
40        return np.array(tensor).reshape([num_batches, max_time,
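The test helpers in the hits above are cut off mid-expression. A complete, runnable reconstruction — assuming the truncated transpose permutation is `[1, 0, 2]` (batch-major to time-major), which the `[1, 0, 2` fragment suggests but does not confirm — might look like:

```python
import numpy as np

def convert_to_time_major(tensor, tensor_shape):
    # [batch, time, feature] -> [time, batch, feature], flattened back
    # to a plain list as the NNAPI test generators expect.
    return np.array(tensor).reshape(tensor_shape).transpose(
        [1, 0, 2]).flatten().tolist()

def reverse_batch_major(tensor, tensor_shape):
    # Reverse the time axis within each batch entry.
    return np.array(tensor).reshape(tensor_shape)[:, ::-1, :].flatten().tolist()

def split_tensor_in_two(tensor, tensor_shape):
    # Split along the innermost dimension into two equal halves.
    tensor = np.array(tensor).reshape(tensor_shape)
    left, right = np.split(tensor, 2, axis=len(tensor_shape) - 1)
    return left.flatten().tolist(), right.flatten().tolist()
```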
148       for tensor in op.ins:
150           "source": str(tensor),
153       for tensor in op.outs:
155           "target": str(tensor),
199   … as an internal operand. Will skip if the model does not have any output tensor that is compatible…
231   …model to model inputs. Will skip if the model does not have any constant tensor, or if the model h…
233   …t as an internal operand. Will skip if the model does not have any input tensor that is compatible…
226   Result setInputTensor(Execution* execution, int tensor, const std::vector<T>& data) {
227       return execution->setInput(tensor, data.data(), sizeof(T) * data.size());
230   Result setOutputTensor(Execution* execution, int tensor, std::vector<T>* data) {
231       return execution->setOutput(tensor, data->data(), sizeof(T) * data->size());
101   inline bool hasTensor(IOperationExecutionContext* context, const uint32_t tensor) {
102       return context->getInputBuffer(tensor) != nullptr;
174       for (const int tensor : requiredTensorInputs) {
175           NN_RET_CHECK(!context->isOmittedInput(tensor))
176               << "required input " << tensor << " is omitted";
94    inline bool hasTensor(IOperationExecutionContext* context, const uint32_t tensor) {
95        return context->getInputBuffer(tensor) != nullptr;
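The C++ hits above show the presence-check pattern: a tensor input exists iff its buffer is non-null, and `prepare()` rejects any omitted required input. A hypothetical Python analogue of that pattern (names `has_tensor` and `prepare` invented here for illustration) would be:

```python
# Hypothetical Python analogue of the hasTensor()/prepare() checks in
# the hits above; not part of the NNAPI itself.

def has_tensor(input_buffers, index):
    # Mirrors: context->getInputBuffer(tensor) != nullptr
    return input_buffers.get(index) is not None

def prepare(input_buffers, required_tensor_inputs):
    # Mirrors the NN_RET_CHECK loop over requiredTensorInputs.
    for index in required_tensor_inputs:
        if not has_tensor(input_buffers, index):
            raise ValueError(f"required input {index} is omitted")
    return True
```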
61    * A custom tensor type.
63    * Attached to this tensor is {@link ExampleTensorParams}.
76    * * 0: A tensor of {@link EXAMPLE_TENSOR}.