
Adding support for cross compilation, CUDA, and sanitizers #90

Closed
brian-brt opened this issue Sep 14, 2021 · 18 comments
Labels
enhancement New feature or request

Comments

@brian-brt

This project has been super helpful for configuring toolchains. I've made some pretty extensive changes, adding support for:

  • Cross compilation (tested from Linux to Linux in both directions between k8 and aarch64, should work between anything LLVM supports)
  • Building CUDA code
  • msan+asan+ubsan

I've been using these for a while and have been happy with them. I'm interested in contributing these features back to this project. I figured I'd open an issue to talk about how you'd like to see them before just creating large PRs.

Looking around, #85 takes a different approach to similar features, and touches much of the same code.

Also, all three of those things require building up supporting tarballs (sysroots, the CUDA SDK, and libcxx, respectively). I'm happy to provide directions and/or examples of each along with the code, but they need to be created specifically for each target environment. Any preferences on how to approach that?

@rrbutani
Collaborator

Hi! I'm not a maintainer of this repo but I did write #85 and I'd love to know more about the changes you made to support cross compilation, and/or to see your fork if you're willing to make it public. There are some definite downsides to the approach in #85 and I'm very curious to know how you handle target triples, Bazel platforms, target-specific sysroots, and some of the other cross-compilation pain points.

Also I'm not sure if I'm understanding correctly; did you also add support for aarch64 hosts?

brian-brt added a commit to BlueRiverTechnology/bazel-toolchain that referenced this issue Sep 14, 2021
This is a combination of things I mentioned in
bazel-contrib#90 and hacks for things specific to our
environment. Putting it all in a commit to start a discussion, I'm going
to split out the pieces that make sense later.
@brian-brt
Author

I threw up my fork in 8b06fed. As mentioned in the commit message, it's a combination of the things I mentioned here and some hacks that don't make sense to contribute.

Looking through #85, I think my handling of sysroots is pretty similar (use a separate one for each target, specified via a map).

I haven't had any trouble with target triples. I only have a single toolchain for each @platforms//cpu, and I have a platform_mappings file to select the target when building, which might simplify things compared to wasm.
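
For context, a minimal sketch of what per-CPU platform targets like that can look like (target names here are hypothetical, not taken from the fork; a platform_mappings file then maps legacy flags onto these targets):

```python
# BUILD.bazel -- one platform per @platforms//cpu value the toolchain targets.
platform(
    name = "linux_x86_64",
    constraint_values = [
        "@platforms//os:linux",
        "@platforms//cpu:x86_64",
    ],
)

platform(
    name = "linux_aarch64",
    constraint_values = [
        "@platforms//os:linux",
        "@platforms//cpu:aarch64",
    ],
)
```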

Yes, I did add aarch64 support. It's almost line-for-line identical to #79, so I didn't include it in the list. I did a few refactorings to reduce duplication, but in hindsight I'm not sure those helped maintainability, so I'll take another look once #79 is merged.

@rrbutani
Collaborator

rrbutani commented Sep 15, 2021

Thanks!

Some misc notes:

edit: sorry, not sure why GitHub decided to expand out those issue links 😕 (edit: oh, it's because they're in bullet points)

@brian-brt
Author

> Thanks!
>
> Some misc notes:

That's neat, and I agree it's somewhat excessive. I think it's unlikely that a toolchain for some untested platform will work without tweaking, so I'd rather prioritize making it easy to get working vs making it possible for somebody to try and end up with a not-quite-working toolchain.

> • Use unix_cc_toolchain_config.bzl:cc_toolchain_config from @bazel_tools. #75 also adds some plumbing to go fetch sysroots for each target
>
>   • it's questionable whether this should live in this repo; I think it's nice since it saves users from having to hunt down a sysroot that matches the compiler version (if necessary) but I understand that it's maybe a bit much
>   • for WASI though I think it's fine given that there's a fairly canonical source for it

Agreed. WASI seems pretty reasonable. For Linux I don't think you can provide one just for the compiler version. Among other things, the sysroot determines the libc version, which can be different for different environments with the same toolchain.

ubsan has lots of compile-time choices around what exactly to enable. asan is kind of tricky to support because you can't use precompiled shared libraries for the most part. msan is even trickier to support because you also need a rebuilt C++ standard library. I suspect @bazel_tools just doesn't want to deal with those because it tries to support a large range of compilers.

I can't think of many references to using sanitizers with bazel. I think it requires making enough decisions to set up that most people don't. rules_go has a reference to msan in features, but it's as deprecated functionality. I also don't trigger that codepath in my usage of rules_go for some reason...
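
To make that concrete, here is a rough sketch of what an opt-in asan feature can look like in a Starlark toolchain config (illustrative only, not the fork's actual implementation; the load paths are the standard @bazel_tools ones):

```python
load("@bazel_tools//tools/build_defs/cc:action_names.bzl", "ACTION_NAMES")
load(
    "@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl",
    "feature",
    "flag_group",
    "flag_set",
)

# Disabled by default; targets opt in with features = ["asan"], or builds opt
# in globally with --features=asan. The flag has to reach both compile and
# link actions so the sanitizer runtime gets linked in.
asan_feature = feature(
    name = "asan",
    flag_sets = [
        flag_set(
            actions = [
                ACTION_NAMES.c_compile,
                ACTION_NAMES.cpp_compile,
                ACTION_NAMES.cpp_link_executable,
                ACTION_NAMES.cpp_link_dynamic_library,
                ACTION_NAMES.cpp_link_nodeps_dynamic_library,
            ],
            flag_groups = [flag_group(flags = ["-fsanitize=address"])],
        ),
    ],
)
```

msan and ubsan would follow the same shape, with the extra wrinkles described above (a rebuilt libc++ for msan, and a set of choices about which checks to enable for ubsan).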

Broadly, I like having the whole toolchain in one function because that makes it easier to modify when adding special things for an environment. However, it does make things a lot less flexible and requires additional work to support new features/targets/etc.

I think it's a breadth (support lots of targets with basic toolchains) vs depth (support a few targets with fancy toolchains) tradeoff. I'm not sure which approach makes the most sense for this project, and I'm happy to clean things up a bit and declare my fork permanent if #75 is the way for this project to go. That's the core of my reason for creating this issue: I'm really not sure what the best answer is.

> • for CUDA support: are there reasons to prefer adding support to bazel-toolchain instead of using rules_cuda with using_clang?

The main one is not having an additional toolchain. I don't want any of the auto-configuration, and I want everything to be hermetic, so I'd need to carefully look through rules_cuda and tweak everything. In the end, I'd have something very similar to this project.

Also, I like passing -x cuda to all the compilation actions, instead of having a separate cuda_library macro to call. Clang ignores it for source files without cuda kernels. Having a cuda_library which will happily compile non-CUDA files is confusing.

@rrbutani
Collaborator

rrbutani commented Sep 15, 2021

> That's neat, and I agree it's somewhat excessive. I think it's unlikely that a toolchain for some untested platform will work without tweaking, so I'd rather prioritize making it easy to get working vs making it possible for somebody to try and end up with a not-quite-working toolchain.

My thinking was that it's still useful to have just a "working"-ish compiler for some bare metal targets (i.e. embedded) but this seems questionable the more I think about it.

I still do think having that kind of mapping logic isn't a bad way to deal with small target triple differences (i.e. wasm32-unknown-none vs wasm32-unknown-wasi, etc.) and is potentially better than expanding out the full matrix of targets if we end up supporting lots of them but that's not of concern yet. Unless it ends up seeming very broken I think it's fine to leave in though.

> For Linux I don't think you can provide one just for the compiler version. Among other things, the sysroot determines the libc version, which can be different for different environments with the same toolchain.

By different libc versions do you mean different implementations; i.e. glibc/musl? Or actually different versions; i.e. glibc 2.14 vs. glibc 2.34?

For the former, I've seen that included in target triples sometimes; i.e. x86_64-unknown-linux-musl. My hope was that that makes it possible to make reasonable choices about the sysroot to provide for a target.

For the latter, I'm not sure what we as a toolchain should do about this. Up to now (i.e. when not cross compiling) this toolchain has just used whatever libc/etc is present on the host machine IIUC. I'm tempted to say this is somewhat out of scope for this repo (users who need specific sysroots can provide one; we'll provide what we think is good for the common use case) but I do recognize that this is a real use case. I'm also not very confident about being able to find a good sysroot to use for all targets (WASI has one, macOS targets do too, but it's definitely trickier for e.g. Linux targets; musl seems easier to ship but that's potentially contentious; I'd feel better about, for example, defaulting to llvm-libc if that were an option).

Couple of unrelated things:

  • it occurred to me that I have no idea how Bazel or other C/C++ toolchains handle statically linking in libc; is there a feature for this or something (there's static_link_cpp_runtimes for C++)? a mechanism to specify a sysroot independent of the toolchain?
  • more and more it seems appealing to go the Zig route and just compile compiler-rt/libc/libc++ on the fly as needed for the particular target/options/sanitizers/whatever
  • this is not trivial and I'm also not sure if it's even possible to do this in Bazel as is; every time I try to make a cc_toolchain that has rules_cc dependencies I get cycle errors even if I explicitly use transitions to specify the use of a different cc_toolchain (I think this ultimately has to do with rules_cc being mostly native and the transition to toolchain resolution not being complete)
  • ^ isn't insurmountable; it's definitely possible to just call the compiler straight from the command line in repo rules and forgo bazel caching, etc. for creating the stdlib and stuff (there's precedent too; I think @bazel_tools does this for compiler feature detection like autotools does)
    • but I think the larger point is that ^ is a fair amount of work and that at that point you might as well ship Zig, probably

> ubsan has lots of compile-time choices around what exactly to enable. asan is kind of tricky to support because you can't use precompiled shared libraries for the most part. msan is even trickier to support because you also need a rebuilt C++ standard library. I suspect @bazel_tools just doesn't want to deal with those because it tries to support a large range of compilers.
>
> I can't think of many references to using sanitizers with bazel. I think it requires making enough decisions to set up that most people don't. rules_go has a reference to msan in features, but it's as deprecated functionality. I also don't trigger that codepath in my usage of rules_go for some reason...

Having thought about this a bit more I don't think it's a big deal at all to add "new" features and I think ^ is a compelling argument for why you really do want this stuff to live in a toolchain and not just hacked into some --copts in a .bazelrc. (TensorFlow seems to do the latter but I have no idea how they handle the C++ stdlib/runtime side)

> I think it's a breadth (support lots of targets with basic toolchains) vs depth (support a few targets with fancy toolchains) tradeoff.

I think this is a good way to frame it.

> I'm not sure which approach makes the most sense for this project, and I'm happy to clean things up a bit and declare my fork permanent if #75 is the way for this project to go. That's the core of my reason for creating this issue: I'm really not sure what the best answer is.

I'm personally more interested in supporting a wide range of targets at the moment but providing ASAN/TSAN/UBSAN is very compelling. These two things don't seem fundamentally incompatible; I think the only real argument for #75 is that leaning on unix_cc_toolchain_config.bzl reduces the maintenance burden but it doesn't seem like all that much extra work.

It's obviously not up to me – I am not a maintainer of this repo – but I'm personally leaning towards redoing #75 to instead update cc_toolchain_config.bzl.tpl to pull in some of the new stuff from upstream (like LTO support).

(other people should definitely weigh in too)

> Also, I like passing -x cuda to all the compilation actions, instead of having a separate cuda_library macro to call. Clang ignores it for source files without cuda kernels. Having a cuda_library which will happily compile non-CUDA files is confusing.

Is adding -x cuda and the other flags as --copts viable for your use case?

Flags like these living in a toolchain make me a little nervous but I admittedly don't have a good understanding of the space.

@brian-brt
Author

> > That's neat, and I agree it's somewhat excessive. I think it's unlikely that a toolchain for some untested platform will work without tweaking, so I'd rather prioritize making it easy to get working vs making it possible for somebody to try and end up with a not-quite-working toolchain.
>
> My thinking was that it's still useful to have just a "working"-ish compiler for some bare metal targets (i.e. embedded) but this seems questionable the more I think about it.

I've been thinking about doing this too, but I think it makes more sense as a separate repository rule, possibly in a separate project. It could share some of the LLVM download infrastructure, but I'm not sure if it's enough to live in this project.

Bare metal toolchains are set up differently from OS ones. There aren't any kernel headers, it's -ffreestanding, and everything has to be statically linked (including the libc). Also, I think the toolchain has to manage linker scripts for Bazel; I think that's possible to do in a reasonably configurable way, but it's tricky. My goal would be "here's a toolchain and a set of rules for building bare metal code, and preconfigured ones for STM32 and Kinetis K". It's rising towards the top of my list of projects; it might make it to the top eventually...

> > For Linux I don't think you can provide one just for the compiler version. Among other things, the sysroot determines the libc version, which can be different for different environments with the same toolchain.
>
> By different libc versions do you mean different implementations; i.e. glibc/musl? Or actually different versions; i.e. glibc 2.14 vs. glibc 2.34?

I've run into problems with different versions of Debian and Ubuntu having different versions of glibc. Those are about as similar as two operating systems can get, but there are still situations where compiling against a newer glibc uses symbol versions that an older one doesn't have, or compiling against an older glibc doesn't have a new syscall that you want to use.

> For the former, I've seen that included in target triples sometimes; i.e. x86_64-unknown-linux-musl. My hope was that that makes it possible to make reasonable choices about the sysroot to provide for a target.
>
> For the latter, I'm not sure what we as a toolchain should do about this. Up to now (i.e. when not cross compiling) this toolchain has just used whatever libc/etc is present on the host machine IIUC. I'm tempted to say this is somewhat out of scope for this repo (users who need specific sysroots can provide one; we'll provide what we think is good for the common use case) but I do recognize that this is a real use case. I'm also not very confident about being able to find a good sysroot to use for all targets (WASI has one, macOS targets do too, but it's definitely trickier for e.g. Linux targets; musl seems easier to ship but that's potentially contentious; I'd feel better about, for example, defaulting to llvm-libc if that were an option).

I think any default besides dynamically linking to glibc isn't going to work for most people. A minimal sysroot with a relatively old version of glibc might work though.

> • it occurred to me that I have no idea how Bazel or other C/C++ toolchains handle statically linking in libc; is there a feature for this or something (there's static_link_cpp_runtimes for C++)? a mechanism to specify a sysroot independent of the toolchain?

Statically linking glibc on a GNU/Linux system (I can't speak for other OS) is a pretty weird thing to do. It causes a variety of "fun" problems, and isn't something I think is worth supporting. https://stackoverflow.com/a/57478728 has some good reasons, for example. Also, I don't see a use case for it. The C++ static library has very contained forward dependencies (just libc and the compiler support library) and poor compatibility among reverse dependencies (all dependents need to use almost the exact same one, period). glibc is the opposite; its forward dependencies include the kernel, dynamic linker, and a surprising amount of /etc, and practically any reverse dependency can be used with a newer version.

Statically linking musl for a Linux binary is an interesting idea. I've never done that, but it looks approachable.

> • more and more it seems appealing to go the Zig route and just compile compiler-rt/libc/libc++ on the fly as needed for the particular target/options/sanitizers/whatever
>
>   • this is not trivial and I'm also not sure if it's even possible to do this in Bazel as is; every time I try to make a cc_toolchain that has rules_cc dependencies I get cycle errors even if I explicitly use transitions to specify the use of a different cc_toolchain (I think this ultimately has to do with rules_cc being mostly native and the transition to toolchain resolution not being complete)
>   • ^ isn't insurmountable; it's definitely possible to just call the compiler straight from the command line in repo rules and forgo bazel caching, etc. for creating the stdlib and stuff (there's precedent too; I think @bazel_tools does this for compiler feature detection like autotools does)
>   • but I think the larger point is that ^ is a fair amount of work and that at that point you might as well ship Zig, probably

That's actually really cool; I hadn't read about it before. Thanks for the pointer :) Also, agreed that this seems out of scope.

I've got a little script that rebuilds libc++ with msan that I can probably get permission to contribute, which is the only part I've needed. Running it manually hasn't been a big deal for me.

> ubsan has lots of compile-time choices around what exactly to enable. asan is kind of tricky to support because you can't use precompiled shared libraries for the most part. msan is even trickier to support because you also need a rebuilt C++ standard library. I suspect @bazel_tools just doesn't want to deal with those because it tries to support a large range of compilers.
> I can't think of many references to using sanitizers with bazel. I think it requires making enough decisions to set up that most people don't. rules_go has a reference to msan in features, but it's as deprecated functionality. I also don't trigger that codepath in my usage of rules_go for some reason...

> Having thought about this a bit more I don't think it's a big deal at all to add "new" features and I think ^ is a compelling argument for why you really do want this stuff to live in a toolchain and not just hacked into some --copts in a .bazelrc. (TensorFlow seems to do the latter but I have no idea how they handle the C++ stdlib/runtime side)

> I'm personally more interested in supporting a wide range of targets at the moment but providing ASAN/TSAN/UBSAN is very compelling. These two things don't seem fundamentally incompatible; I think the only real argument for #75 is that leaning on unix_cc_toolchain_config.bzl reduces the maintenance burden but it doesn't seem like all that much extra work.

Agreed.

> Is adding -x cuda and the other flags as --copts viable for your use case?

-x cuda is doable, but I think managing --cuda-path needs to be part of the toolchain. It contains binaries for the compiler to run, so the toolchain needs to get the host versions. It also needs some special handling for absolute/relative/etc paths and getting some libraries+header files into the action inputs, similar to the sysroot.

> Flags like these living in a toolchain make me a little nervous but I admittedly don't have a good understanding of the space.

I was planning to pull --cuda-gpu-arch specifically out as an attr on the repository rule (like cuda_gpu_arches = ["sm_35", "sm_72"]).
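
As a rough Starlark sketch of how such an attribute might be declared (the implementation stub here is hypothetical, not the fork's actual code):

```python
def _llvm_toolchain_impl(rctx):
    # Stub: a real implementation would render the full toolchain config
    # here, expanding each entry into a --cuda-gpu-arch=<arch> compile flag.
    flags = ["--cuda-gpu-arch=" + arch for arch in rctx.attr.cuda_gpu_arches]
    rctx.file("BUILD.bazel", "# generated; CUDA flags: {}\n".format(flags))

llvm_toolchain = repository_rule(
    implementation = _llvm_toolchain_impl,
    attrs = {
        "cuda_gpu_arches": attr.string_list(
            doc = "CUDA GPU architectures to compile for, e.g. sm_35, sm_72.",
            default = [],
        ),
    },
)
```

A user would then write something like llvm_toolchain(name = "llvm_toolchain", cuda_gpu_arches = ["sm_35", "sm_72"]) in their WORKSPACE.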

@rrbutani
Collaborator

> I've been thinking about doing this too, but I think it makes more sense as a separate repository rule, possibly in a separate project. It could share some of the LLVM download infrastructure, but I'm not sure if it's enough to live in this project.
>
> Bare metal toolchains are set up differently from OS ones. There aren't any kernel headers, it's -ffreestanding, and everything has to be statically linked (including the libc). Also, I think the toolchain has to manage linker scripts for Bazel; I think that's possible to do in a reasonably configurable way, but it's tricky. My goal would be "here's a toolchain and a set of rules for building bare metal code, and preconfigured ones for STM32 and Kinetis K". It's rising towards the top of my list of projects; it might make it to the top eventually...

I actually think it's totally reasonable to have this repo handle bare metal targets too; I think you usually can infer whether to link against libc statically for a particular target triple (i.e. x86_64-unknown-uefi or thumbv7em-unknown-none-eabi or really anything with none as the "os" isn't going to be dynamically linked, etc.). For flags I think having users add the ones that are specific to their board seems reasonable; same with the linker script (i.e. the linker script in deps for your top level cc_binary and -T $(location ":linker_script") in linkopts, etc.).
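
That pattern might look roughly like this (file and target names invented for illustration; additional_linker_inputs is one way to make the script visible to the link action):

```python
# BUILD.bazel -- hypothetical bare-metal binary wiring in a board-specific
# linker script while the toolchain itself stays generic.
cc_binary(
    name = "firmware",
    srcs = ["main.c"],
    additional_linker_inputs = ["layout.ld"],
    linkopts = ["-T $(location layout.ld)"],
)
```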

There's definitely more complexity than just that and I think there's probably even less of a consensus about what the "common use case" would be for a lot of the targets in the embedded space but I think there's still value in this repo offering, for example, bare metal toolchains that ship with a sysroot that has a pre-compiled static newlib.

But you're right; this is probably a discussion for later and I think regardless there'd be a legitimate want for, e.g., toolchains that "wrap" this one and have the right SDK/compiler flags/retarget-ed libc for a particular board.

(as a totally unrelated thing, I think there are a lot of potentially neat things to do in this space! I've wanted to build a ruleset that uses QEMU and provides a cc_test like wrapper so that you can run "unit" tests without needing your device; it'd also be very neat to build tooling around something like unity or one of the other test "frameworks" that let you do "on the device" testing)

> I think any default besides dynamically linking to glibc isn't going to work for most people. A minimal sysroot with a relatively old version of glibc might work though.

I agree; this seems like a good default.

Do the tarballs you're using for aarch64/x86 linux sysroots use glibc?

> Statically linking glibc on a GNU/Linux system (I can't speak for other OS) is a pretty weird thing to do. It causes a variety of "fun" problems, and isn't something I think is worth supporting. https://stackoverflow.com/a/57478728 has some good reasons, for example. Also, I don't see a use case for it. The C++ static library has very contained forward dependencies (just libc and the compiler support library) and poor compatibility among reverse dependencies (all dependents need to use almost the exact same one, period). glibc is the opposite; its forward dependencies include the kernel, dynamic linker, and a surprising amount of /etc, and practically any reverse dependency can be used with a newer version.

It's definitely pretty weird but it's actually a real thing, I'm pretty sure! 😃

But you're absolutely right; when I said statically linked libc I meant more musl than glibc.

The one somewhat legitimate use case I know of for actually doing that (for hosted platforms that is) is totally static binaries for FROM scratch Docker containers. Which is maybe still a bit pathological.

> That's actually really cool; I hadn't read about it before. Thanks for the pointer :) Also, agreed that this seems out of scope.

No problem! It seems super compelling because of how much it simplifies distribution and stuff; it lets you get around needing a huge matrix of sysroots. But also, yeah, it is pretty neat (and out of scope). 😛

> I've got a little script that rebuilds libc++ with msan that I can probably get permission to contribute, which is the only part I've needed. Running it manually hasn't been a big deal for me.

👍

I don't know if there are problems with doing so but if we end up supporting MSAN and friends it'd be neat to have the script run in CI and push libc++ builds to the releases page of this repo or something for the repo rule to grab.

> -x cuda is doable, but I think managing --cuda-path needs to be part of the toolchain. It contains binaries for the compiler to run, so the toolchain needs to get the host versions. It also needs some special handling for absolute/relative/etc paths and getting some libraries+header files into the action inputs, similar to the sysroot.

Ah, I see.

I think the main hesitation I have is that it complicates the repo rule a little bit (cuda isn't a valid option if you're targeting, say, WASM or something, etc.) but I think it's actually fine; it doesn't really seem like that much more complexity.

I think (afaik) distributing CUDA libraries and friends is a little dicey and something we'd probably need to have users grab themselves? (or maybe not – I know TensorFlow makes you grab them yourself but I'm pretty sure I've seen at least one bazel workspace that does it for you and prints out lots of warnings re: license agreements) Regardless, we should be able to figure something out I think.

I'm also pretty sure I've seen a CUDA ruleset that actually patches rules_cc to inject the right libraries and copts and stuff. But that seems like a strictly worse solution than adding toolchain level support.

> I was planning to pull --cuda-gpu-arch specifically out as an attr on the repository rule (like cuda_gpu_arches = ["sm_35", "sm_72"]).

Perfect; thanks.

@brian-brt
Author

> Do the tarballs you're using for aarch64/x86 linux sysroots use glibc?

Yep. They're built by just merging the data tarballs of .deb files from Debian/Ubuntu (I've done it with a few different versions at this point).

> > I've got a little script that rebuilds libc++ with msan that I can probably get permission to contribute, which is the only part I've needed. Running it manually hasn't been a big deal for me.
>
> I don't know if there are problems with doing so but if we end up supporting MSAN and friends it'd be neat to have the script run in CI and push libc++ builds to the releases page of this repo or something for the repo rule to grab.

It's not super fast, but doing it for releases should be doable.

> I think (afaik) distributing CUDA libraries and friends is a little dicey and something we'd probably need to have users grab themselves? (or maybe not – I know TensorFlow makes you grab them yourself but I'm pretty sure I've seen at least one bazel workspace that does it for you and prints out lots of warnings re: license agreements) Regardless, we should be able to figure something out I think.

Yep. My current thinking is to document "download deb files and combine them into a tarball", with an example shell script to run manually. Also, I just found directions from NVIDIA on using Bazel, which say to download the equivalent tarballs directly. Neither of those approaches has any kind of license to accept beforehand, which is interesting. Some of them come with licenses in the tarballs; somebody should look through those for anything relevant when we get there.

@siddharthab
Contributor

With the commits I added today, I have refactored the repo to better support cross-compilation. I also added a test to check cross-compilation from darwin-x86_64 to linux-x86_64, with linkstatic (statically linked user libraries but dynamically linked system libraries) and fully_static (all libraries are statically linked).

I think I won't be able to come back to this project for a long time. I will update the README with my current thoughts on the state of things.

I will give the two of you maintainer access to the repo. Please feel free to add new features, refactor, patch, etc. I won't be able to give attention to the PRs, so please go ahead and review among yourselves and merge if you feel satisfied.

@brian-brt
Author

@rrbutani thoughts on where to take this now? I think it makes sense to start with me pulling out the parts needed for cross compilation on top of the latest changes, and then make a PR with that.

@rrbutani
Collaborator

rrbutani commented Oct 7, 2021

@brian-brt In the original post you mentioned wanting to be able to cross-compile from k8 to aarch64 and vice versa (for Linux); are there other configurations you wanted support for? AFAIK this repo supports those configurations after #98.

Ultimately, I'd like to do something a little more general; I've been meaning to redo #85 on top of the recent changes. That said I don't think I'm going to get to that in the next few days so if there are other configurations you need I think it makes perfect sense to add support for those now.

@brian-brt
Author

I guess getting libc++ from the sysroot works. My approach gets it from the Clang release for the target architecture, which means fewer things to configure in the sysroot but more things to download. I'll have to think a bit more about whether it's worth supporting both of them.

@jfirebaugh
Contributor

I'm interested in sanitizer support. Is this more difficult to add now that bazel-toolchain uses unix_cc_toolchain_config.bzl?

> Are there C++/other toolchains that do actually pick up on --feature msan/asan/ubsan?

The default Bazel-generated toolchain on macOS supports asan, tsan, and ubsan features (code). I was actually surprised to learn that unix_cc_toolchain_config.bzl does not.
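
With a toolchain that defines those features, opting in is then just a matter of requesting them (a minimal sketch; the target name is hypothetical):

```python
# BUILD.bazel -- enable asan for one test target; the same thing can be done
# build-wide with --features=asan on the command line.
cc_test(
    name = "overflow_test",
    srcs = ["overflow_test.cc"],
    features = ["asan"],
)
```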

@fmeum
Member

fmeum commented Aug 4, 2022

@jfirebaugh You could submit a Bazel PR to add the same features to the Unix toolchain - I'm pretty sure it would be very welcome.

@oliverlee
Contributor

@fmeum What's the right way to test changes to the unix toolchain locally? Doesn't it get grouped into the magic @bazel_tools repository?

For now, I've simply copied the unix toolchain file in order to add and test sanitizer features without needing to build bazel.

@fmeum
Member

fmeum commented Dec 28, 2022

@oliverlee Copying and modifying the generated file is the easiest approach I can think of. After you have confirmed your changes work, you can modify the file in a checkout of Bazel and build it with bazel build //src:bazel-dev.

copybara-service bot pushed a commit to bazelbuild/bazel that referenced this issue Jan 11, 2023
There was some discussion here about adding `asan`, `tsan`, and `ubsan` features to the unix toolchains to match macos. bazel-contrib/toolchains_llvm#90 (comment)

I've taken my changes local to that project and copied it into Bazel as suggested by @fmeum. I've written some tests but I'm not sure where to place them or if it makes sense to depend on the error messages from asan/tsan/ubsan.

Closes #17083.

PiperOrigin-RevId: 501213060
Change-Id: I9d973ebe35e4fa2804d2e91df9f700a285f7b404
ShreeM01 added a commit to bazelbuild/bazel that referenced this issue Jan 20, 2023
Co-authored-by: Oliver Lee <oliverzlee@gmail.com>
hvadehra pushed a commit to bazelbuild/bazel that referenced this issue Feb 14, 2023
@garymm
Contributor

garymm commented Jun 13, 2023

I think this issue should be split up or closed.
Sanitizers are supported now, I believe.
The README says cross compilation is supported.
And I'm not sure if CUDA support is even desired, since rules_cuda exists now.

@siddharthab
Contributor

Closing as per above comment.
