I am walking through all the steps required to implement an operator in SOFIE, using the Leaky Relu operator as an example.
The definition is available in the ONNX documentation here.
- Register the operator in OperatorList.hxx, located here.
#include "TMVA/ROperator_LeakyRelu.hxx"
- Add the operator to CMakeLists.txt, located here.
TMVA/ROperator_LeakyRelu.hxx
- Implement a new C++ class for the operator, here.
#ifndef TMVA_SOFIE_ROPERATOR_LeakyRelu
#define TMVA_SOFIE_ROPERATOR_LeakyRelu
#include "TMVA/SOFIE_common.hxx"
#include "TMVA/ROperator.hxx"
#include "TMVA/RModel.hxx"
#include <iomanip>
#include <limits>
#include <sstream>
namespace TMVA{
namespace Experimental{
namespace SOFIE{
template <typename T>
class ROperator_LeakyRelu final : public ROperator
{
private:
/* Attributes*/
float falpha=0.01; //default value
std::string fNX;
std::string fNY;
std::vector<size_t> fShape;
std::string fType;
Here we have 5 data members:
- falpha holds the alpha attribute mentioned in the ONNX definition of Leaky Relu. The default value of alpha is taken as 0.01.
- fNX is the name of the input tensor.
- fNY is the name of the resultant output tensor.
- fShape is the shape of the output tensor, which is the same as that of the input tensor.
- fType records the data type and is used to check whether the type is supported by SOFIE.
public:
ROperator_LeakyRelu(){}
ROperator_LeakyRelu(float alpha,std::string nameX, std::string nameY):
falpha(alpha),fNX(UTILITY::Clean_name(nameX)), fNY(UTILITY::Clean_name(nameY))
{
if(std::is_same<T, float>::value){
fType = "float";
}
else{
throw
std::runtime_error("TMVA SOFIE Encountered unsupported type parsing a Leaky Relu operator");
}
}
- We declare two constructors here: a default constructor, and a parameterised constructor whose arguments are alpha, fNX --> the input tensor name (string), and fNY --> the output tensor name (string).
- We check the data type here: if it is float, we assign fType as "float"; otherwise we throw an exception about an unsupported type while parsing the Leaky Relu operator.
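The type guard in the constructor can be sketched in isolation like this (the helper names below are hypothetical, not part of SOFIE; the check and the error message mirror the constructor above):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <type_traits>

// Hypothetical standalone analogue of the constructor's type check:
// only float instantiations get a valid type name, everything else throws.
template <typename T>
std::string DeduceTypeName() {
   if (std::is_same<T, float>::value) {
      return "float";
   }
   throw std::runtime_error(
      "TMVA SOFIE Encountered unsupported type parsing a Leaky Relu operator");
}

// Helper showing that a non-float instantiation is rejected at run time.
bool ThrowsForDouble() {
   try {
      DeduceTypeName<double>();
   } catch (const std::runtime_error &) {
      return true; // double is rejected, as in the real operator
   }
   return false;
}
```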
std::vector<ETensorType> TypeInference(std::vector<ETensorType> input){
return input;
}
The TypeInference method returns the type of the output tensor; for Leaky Relu it is the same as the input type.
std::vector<std::vector<size_t>> ShapeInference(std::vector<std::vector<size_t>> input){
auto ret = input; //suggest copy to compiler
return ret;
}
The ShapeInference method returns the shape of the output tensor; for Leaky Relu it is the same as the input shape.
void Initialize(RModel& model){
if (model.CheckIfTensorAlreadyExist(fNX) == false){ //input must be a graph input, or already initialized intermediate tensor
throw std::runtime_error("TMVA SOFIE Leaky Relu Op Input Tensor is not found in model");
}
fShape = model.GetTensorShape(fNX);
model.AddIntermediateTensor(fNY, model.GetTensorType(fNX), fShape);
}
- In the Initialize method we check whether the input is a graph input or an already initialized intermediate tensor; otherwise we throw an exception.
- We add the output tensor to the ONNX graph model. AddIntermediateTensor is defined in RModel.cxx and takes the tensor name, type and shape as arguments. So here the output tensor's type and shape are the same as the input tensor's.
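To make the bookkeeping concrete, here is a toy, self-contained stand-in for the model interaction in Initialize (the ToyModel class and its simplified signatures are invented for illustration; the real API lives in RModel):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <vector>

// Toy stand-in for the RModel bookkeeping used by Initialize.
// Tensor types are omitted for brevity; only names and shapes are tracked.
struct ToyModel {
   std::unordered_map<std::string, std::vector<size_t>> fTensorShapes;
   bool CheckIfTensorAlreadyExist(const std::string &name) const {
      return fTensorShapes.count(name) != 0;
   }
   std::vector<size_t> GetTensorShape(const std::string &name) const {
      return fTensorShapes.at(name);
   }
   void AddIntermediateTensor(const std::string &name,
                              const std::vector<size_t> &shape) {
      fTensorShapes[name] = shape;
   }
};

// Mirrors ROperator_LeakyRelu::Initialize: fail if the input is unknown,
// otherwise register the output with the same shape as the input.
std::vector<size_t> InitializeLeakyRelu(ToyModel &model, const std::string &in,
                                        const std::string &out) {
   if (!model.CheckIfTensorAlreadyExist(in)) {
      throw std::runtime_error(
         "TMVA SOFIE Leaky Relu Op Input Tensor is not found in model");
   }
   auto shape = model.GetTensorShape(in);
   model.AddIntermediateTensor(out, shape);
   return shape;
}
```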
std::string Generate(std::string OpName){
OpName = "op_" + OpName;
if (fShape.empty()) {
throw std::runtime_error("TMVA SOFIE Leaky Relu called to Generate without being initialized first");
}
std::stringstream out;
size_t length = ConvertShapeToLength(fShape);
out << SP << "float " << OpName << "_alpha = " << std::setprecision(std::numeric_limits<float>::max_digits10) << falpha << ";\n";
out << "\n//------ LEAKY RELU\n";
out << SP << "for (int id = 0; id < " << length << " ; id++){\n";
out << SP << SP << "tensor_" << fNY << "[id] = ((tensor_" << fNX << "[id] > 0 )? tensor_" << fNX << "[id] : "<< OpName << "_alpha * tensor_"<< fNX<<"[id]);\n";
out << SP << "}\n";
return out.str();
}
};
}//SOFIE
}//Experimental
}//TMVA
#endif //TMVA_SOFIE_ROPERATOR_LeakyRelu
In the Generate function we emit the actual definition of the operator as given in the ONNX documentation.
- First we check whether the shape of the operator exists; if not, we throw an exception.
- We convert the shape to a length using the ConvertShapeToLength function defined in SOFIE_common.cxx; it simply multiplies all the dimensions of the tensor to get the total number of elements.
- We emit the alpha attribute as a float with maximum float precision (std::numeric_limits<float>::max_digits10); its default value is 0.01 if no value is provided.
- We implement the Leaky Relu operator according to the following definition:
f(x) = alpha * x ------> for x < 0,
f(x) = x ------> for x >= 0
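The emitted loop is just an element-wise application of this definition. Below is a minimal standalone sketch (the function names are my own, not SOFIE's) of the length computation done by ConvertShapeToLength and of the Leaky Relu loop that Generate emits:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Product of all dimensions, as ConvertShapeToLength does for a SOFIE shape.
size_t LengthFromShape(const std::vector<size_t> &shape) {
   size_t length = 1;
   for (size_t dim : shape) {
      length *= dim;
   }
   return length;
}

// Element-wise Leaky Relu: f(x) = x for x >= 0, alpha * x for x < 0.
// This mirrors the loop that Generate() writes into the inference code.
std::vector<float> LeakyRelu(const std::vector<float> &x, float alpha = 0.01f) {
   std::vector<float> y(x.size());
   for (size_t id = 0; id < x.size(); id++) {
      y[id] = (x[id] > 0) ? x[id] : alpha * x[id];
   }
   return y;
}
```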
I faced an error regarding the length variable, which is listed below.
/TMVA/ROperator_LeakyRelu.hxx:69:48: error: use of undeclared identifier 'length'
My mentor Lorenzo Moneta helped me with it and fixed the error by adding a line here. It was a general error faced by all SOFIE operators that do not have a weight tensor.
- Declare the function make_ROperator_LeakyRelu in RModelParser_ONNX.hxx. This can be found here.
std::unique_ptr<ROperator> make_ROperator_LeakyRelu(const onnx::NodeProto& nodeproto, const onnx::GraphProto& graphproto, std::unordered_map<std::string, ETensorType>& tensor_type);
Add the operator to the unordered map factoryMethodMap.
{"LeakyRelu", &make_ROperator_LeakyRelu}
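As a self-contained sketch of how such a factory map dispatches on the ONNX op type (the ROp and Factory types below are simplified stand-ins; the real factory functions take ONNX protobuf arguments as shown in the declaration above):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Simplified stand-in for the ROperator hierarchy.
struct ROp {
   virtual ~ROp() = default;
   virtual std::string Name() const = 0;
};

struct ROp_LeakyRelu : ROp {
   std::string Name() const override { return "LeakyRelu"; }
};

// Factory function, analogous to make_ROperator_LeakyRelu.
std::unique_ptr<ROp> make_ROp_LeakyRelu() {
   return std::unique_ptr<ROp>(new ROp_LeakyRelu());
}

// Map from ONNX op type to factory, analogous to factoryMethodMap.
using Factory = std::unique_ptr<ROp> (*)();
const std::unordered_map<std::string, Factory> toyFactoryMethodMap = {
   {"LeakyRelu", &make_ROp_LeakyRelu}
};

// Look up an ONNX op type and build the matching operator, as the parser
// does for each node in the graph; unknown op types yield nullptr here.
std::unique_ptr<ROp> MakeOperator(const std::string &opType) {
   auto it = toyFactoryMethodMap.find(opType);
   if (it == toyFactoryMethodMap.end()) {
      return nullptr;
   }
   return it->second();
}
```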
- Define the function make_ROperator_LeakyRelu in RModelParser_ONNX.cxx as mentioned here. How RModelParser.cxx works is described in my earlier blog; here is its documentation.
I hope you all have got a general idea of implementing ONNX operators in SOFIE! I will be back with more interesting content soon. Until then, goodbye!
Thanks and Regards, Neel Shah