I am trying to use the XLA compiler from TensorFlow following your Jupyter example.
During execution of `bazel build` I always end up with the following build error:
> ERROR: /home/ubuntu/.cache/bazel/_bazel_ubuntu/e5cce820cc082410b4fcc604db349066/external/org_tensorflow/tensorflow/compiler/mlir/xla/BUILD:465:1: Executing genrule @org_tensorflow//tensorflow/compiler/mlir/xla:operator_writer_inc failed (Exit 1)
[6,144 / 7,191] 3 actions running
@org_tensorflow//tensorflow/compiler/xla/client:global_data; 4s local
@org_tensorflow//tensorflow/core/kernels/tensor_forest:resources; 1s local
...//tensorflow/core/kernels:eigen_contraction_kernel_with_mkl; 1s local
external/org_tensorflow/tensorflow/compiler/mlir/xla/ir/hlo_ops.td:22:9: error: Could not find include file 'tensorflow/compiler/mlir/xla/ir/hlo_ops_base.td'
include "tensorflow/compiler/mlir/xla/ir/hlo_ops_base.td"
^
external/org_tensorflow/tensorflow/compiler/mlir/xla/ir/hlo_ops.td:22:9: error: Unexpected input at top level
include "tensorflow/compiler/mlir/xla/ir/hlo_ops_base.td"
^
[6,144 / 7,191] 3 actions running
@org_tensorflow//tensorflow/compiler/xla/client:global_data; 4s local
@org_tensorflow//tensorflow/core/kernels/tensor_forest:resources; 1s local
...//tensorflow/core/kernels:eigen_contraction_kernel_with_mkl; 1s local
Target @org_tensorflow//:graph failed to build
[6,147 / 7,191] checking cached actions
Use --verbose_failures to see the command lines of failed build steps.
[6,147 / 7,191] checking cached actions
INFO: Elapsed time: 7903.567s, Critical Path: 204.12s
[6,147 / 7,191] checking cached actions
INFO: 5961 processes: 5961 local.
[6,147 / 7,191] checking cached actions
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully
So it does not find the hlo_ops_base.td file, which is definitely present at that path (I checked).
The first time I tried this, it worked like a charm.
Afterwards I ran it again on different machines (including perfectly clean VMs on different platforms), but always hit the same issue.
I am using:
- bazel 1.1.0,
- tensorflow 1.14 (cpu),
- protobuf 3.0.0,
- python 2.7
Does anyone have any clue how to solve this? I have searched for it online and it seems no one else is having this issue...
Thanks, Matteo
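In case it helps anyone debugging this: the failing genrule runs from bazel's execroot, so the include has to be visible there, not only in the checked-out source tree. A hedged diagnostic sketch (the `~/.cache/bazel` path is the Linux default and an assumption here; `bazel info output_base` gives the real location):

```python
# Sketch: look for hlo_ops_base.td under bazel's output base, where the
# @org_tensorflow external repository is materialized. ~/.cache/bazel is
# the default on Linux (an assumption; check `bazel info output_base`).
import pathlib

output_base = pathlib.Path.home() / ".cache" / "bazel"
hits = list(output_base.rglob("hlo_ops_base.td")) if output_base.exists() else []
for h in hits:
    print(h)
print(f"found {len(hits)} copies under {output_base}")
```

If the file shows up in the source tree but not under the execroot, the failure is a bazel staging problem (e.g. a stale cache worth clearing with `bazel clean --expunge`) rather than a genuinely missing file.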
Did you solve the error?
No. Are you experiencing the same?
same
Model => 150 ms ± 199 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
XLA binary => 191 ms ± 604 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Why is the XLA-compiled binary slower than the model itself?
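For what it's worth, the `%timeit`-style numbers above can be reproduced outside Jupyter with the stdlib, which makes it easier to rule out measurement noise before concluding the AOT binary is slower. A minimal sketch with placeholder callables (`model_fn` and `xla_fn` are stand-ins, not the gist's actual functions):

```python
# Reproduce "mean ± std. dev. of 7 runs, 10 loops each" with the stdlib.
# model_fn/xla_fn are placeholders (assumptions) standing in for the
# session run and the AOT-compiled call, respectively.
import statistics
import timeit

def model_fn():
    return sum(range(1000))   # placeholder workload

def xla_fn():
    return sum(range(1000))   # placeholder workload

def bench(fn, repeats=7, loops=10):
    times = timeit.repeat(fn, repeat=repeats, number=loops)
    per_loop = [t / loops for t in times]
    return statistics.mean(per_loop), statistics.stdev(per_loop)

for name, fn in [("Model", model_fn), ("XLA binary", xla_fn)]:
    mean_s, std_s = bench(fn)
    print(f"{name} => {mean_s * 1e3:.3f} ms ± {std_s * 1e6:.1f} µs per loop")
```

Possible explanations for a gap that size include per-call invocation overhead if the compiled binary is launched as a subprocess, or a graph whose ops XLA cannot fuse profitably; more repeats help separate a real regression from noise.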
Hello, in step 0.5 I get the error `yes: write error`. Do you know why?
Thanks!
/tmp
Cloning into 'tensorflow'...
remote: Enumerating objects: 17781, done.
remote: Counting objects: 100% (17781/17781), done.
remote: Compressing objects: 100% (13483/13483), done.
remote: Total 17781 (delta 5758), reused 9063 (delta 3707), pack-reused 0
Receiving objects: 100% (17781/17781), 43.38 MiB | 382.00 KiB/s, done.
Resolving deltas: 100% (5758/5758), done.
Checking out files: 100% (16970/16970), done.
/tmp/tensorflow
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.24.1 installed.
Please specify the location of python. [Default is /root/anaconda3/envs/py35/bin/python]:
Found possible Python library paths:
/root/anaconda3/envs/py35/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/root/anaconda3/envs/py35/lib/python3.6/site-packages]
Do you wish to build TensorFlow with XLA JIT support? [Y/n]: XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]: No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: No CUDA support will be enabled for TensorFlow.
Do you wish to download a fresh release of clang? (Experimental) [y/N]: Clang will not be downloaded.
Do you wish to build TensorFlow with MPI support? [y/N]: No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=gdr # Build with GDR support.
--config=verbs # Build with libverbs support.
--config=ngraph # Build with Intel nGraph support.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=noignite # Disable Apache Ignite support.
--config=nokafka # Disable Apache Kafka support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished
yes: standard output: Broken pipe
yes: write error
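The trailing `yes: standard output: Broken pipe` / `yes: write error` comes from piping `yes` into `./configure`: once configure stops reading its stdin, the next write by `yes` fails with EPIPE. Since "Configuration finished" was already printed, the message is harmless here. A minimal sketch of the pipe mechanics (whether the EPIPE message reaches stderr depends on how SIGPIPE is handled in the invoking environment, e.g. a notebook shell):

```python
# Minimal reproduction of the pipe mechanics behind "yes: write error":
# the reader (here `head`) exits after two lines, so the writer's next
# write to the closed pipe fails.
import subprocess

proc = subprocess.run(
    "yes | head -n 2",
    shell=True, capture_output=True, text=True,
)
print(repr(proc.stdout))   # the two lines head read before closing: 'y\ny\n'
```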