How do I specify a “toolchain_identifier” when building tensorflow from source
I am building tensorflow from source in order to use the GPU version with an older card with a compute capability of 3.0.



When building, I get an error:



ERROR: /home/[user]/.cache/bazel/_bazel_[user]/35191c369325bea6db75133a187a58d6/external/local_config_cc/BUILD:57:1: in cc_toolchain rule @local_config_cc//:cc-compiler-k8: Error while selecting cc_toolchain: Toolchain identifier 'local' was not found, valid identifiers are [local_linux, local_darwin, local_windows]


I worked around this by hand-editing ~/.cache/bazel/_bazel_[user]/35191c369325bea6db75133a187a58d6/external/local_config_cc/BUILD to change the toolchain_identifier from "local" to "local_linux" under cc_toolchain.
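For reference, the hand edit amounts to changing a single attribute on the generated rule. A sketch of roughly what the rule looks like after the change (from memory, not the exact generated file; the surrounding attributes vary by bazel version and machine):

```
cc_toolchain(
    name = "cc-compiler-k8",
    toolchain_identifier = "local_linux",  # was "local"
    ...
)
```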



With that change, everything compiles, but that approach seems unconventional to me.



Is there something I should be specifying elsewhere so that bazel gets the identifier correct on its own?
tensorflow bazel
edited Nov 16 '18 at 20:45
asked Nov 14 '18 at 1:54
JLC
  • Did you file an issue at the TensorFlow GitHub repo yet? If not, please do so. I encountered the same issue, and your workaround saved me. My Arch Linux machine at home (also bazel 0.19) compiles without issues, including CUDA, but our workstation hit this toolchain_identifier error with bazel 0.19. The workstation runs Ubuntu with CUDA 9.
    – daniel451
    Nov 15 '18 at 19:42
3 Answers
Not sure this is related, but I was having the same problem. I tried a bunch of things that did not work, including switching between clang and gcc; then I told configure I was using cuDNN 7.2 instead of just 7, and it worked after that.
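If you script the build, the same effect can be had by preseeding the version before running configure. A minimal sketch; TF_CUDNN_VERSION is the environment variable TensorFlow's configure script reads for this prompt, but verify the name against the configure.py in your checkout:

```shell
# Preseed the cuDNN version so ./configure does not fall back to a bare "7".
export TF_CUDNN_VERSION=7.2   # full major.minor, not just "7"
echo "configuring against cuDNN ${TF_CUDNN_VERSION}"
# then, from the tensorflow source root: ./configure
```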
answered Nov 14 '18 at 4:04
JulioBarros
  • Thanks! I entered the full point-release version (7.3.0.29). I'll see what happens if I enter 7.3.
    – JLC
    Nov 15 '18 at 3:18
Open /home/[user]/.cache/bazel/_bazel_jeff/35191c369325bea6db75133a187a58d6/external/local_config_cc/BUILD in any text editor and change the toolchain_identifier on line 57 to local_linux.
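If you would rather not open an editor, the same change can be scripted. A sketch that demonstrates the substitution on a stand-in file; point the path at your real .../external/local_config_cc/BUILD (the cache-hash directory differs per machine). Matching on the attribute is also safer than editing line 57 blindly, since the line number can shift between bazel versions:

```shell
# Demonstrate on a stand-in copy of the generated BUILD file.
BUILD_FILE="$(mktemp)"
printf 'cc_toolchain(\n    toolchain_identifier = "local",\n)\n' > "$BUILD_FILE"

# The actual fix: rename the identifier bazel failed to find.
sed -i 's/toolchain_identifier = "local"/toolchain_identifier = "local_linux"/' "$BUILD_FILE"

grep 'toolchain_identifier' "$BUILD_FILE"
rm -f "$BUILD_FILE"
```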
answered Nov 15 '18 at 22:54
thecomplexitytheorist
I got the same error building tensorflow r1.9 for an older Nvidia GPU card. I downgraded bazel from 0.19.1 to 0.18.1, and the error was fixed.
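If you go this route, a guard in your build script can catch the problematic version early. A sketch using a plain version comparison via sort -V; the "0.19" threshold reflects the reports in this thread, and in practice you would read the version from `bazel version` rather than hard-coding it:

```shell
# Refuse to build with a bazel reported to trigger the toolchain_identifier error.
BAZEL_VERSION="0.18.1"   # stand-in; in practice parse it from `bazel version`
if [ "$(printf '%s\n' "$BAZEL_VERSION" "0.19" | sort -V | head -1)" = "0.19" ]; then
    echo "bazel ${BAZEL_VERSION} may hit the toolchain_identifier bug; try 0.18.1"
else
    echo "bazel ${BAZEL_VERSION} ok"
fi
```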
answered Nov 16 '18 at 8:35
Laomao