the__alchemist 14 days ago

Summary, from someone who uses CUDA with Rust in several projects (computational chemistry and cosmology simulations):

  - This lib has been in an unusable, unmaintained state for years: to get it working, you need specific, several-years-old versions of both rustc and CUDA.
  - It was recently rebooted. I haven't tried the GitHub branch, and there isn't a release yet. Has anyone verified whether it works on current rustc and CUDA?
  - The cudarc library (https://github.com/coreylowman/cudarc) is actively maintained and works well. It does not, however, let you share host and device data structures; you [de]serialize as a byte stream using functions the lib provides (see the sketch at the end of this comment). It works with any CUDA version and GPU from at least the past few years.
I highlight this as a trend I see in software libs, in Rust more than elsewhere: the projects that are promoted the most are often not the most practical or best-managed ones. It's not clear from the description, but maybe Rust-CUDA intends to allow shared data structures between host and device? That would be nice.
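
To make the byte-stream point concrete, here is roughly what a round trip looks like with cudarc. This is a sketch from memory against an older cudarc release (the device API has been reworked in newer versions), so treat the exact method names as approximate:

    use cudarc::driver::CudaDevice;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let dev = CudaDevice::new(0)?; // first GPU

        // No shared host/device structs: you move plain buffers, and
        // richer types get flattened to plain-old-data slices first.
        let host: Vec<f32> = vec![1.0; 1024];
        let on_gpu = dev.htod_copy(host)?; // host -> device
        let back: Vec<f32> = dev.dtoh_sync_copy(&on_gpu)?; // device -> host

        assert_eq!(back.len(), 1024);
        Ok(())
    }
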
  • gbin 14 days ago

    We observed the same thing here at Copper Robotics, where we absolutely need good CUDA bindings for our customers; in general, the lack thereof has been holding back Rust in robotics for years. Finally, with cudarc, we have some hope for a stable project that keeps up with the ecosystem. The remaining interesting question is: why isn't Nvidia investing in the Rust ecosystem?

    • adityamwagh 14 days ago

      I was talking to someone on the CUDA Core Compute Libraries team. They hinted that within the next five years, NVIDIA could support Rust as a language for programming CUDA GPUs.

      I also read a comment on r/rust saying that Rust's safety guarantees make it hard to use for GPU programming. I don't know the specifics.

      Let's see how it plays out!

    • pjmlp 14 days ago

      They kind of are, but not in CUDA directly.

      https://github.com/ai-dynamo/dynamo

      > NVIDIA Dynamo is a high-throughput low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments.

      > Built in Rust for performance and in Python for extensibility,

      Says right there where they see Rust currently.

      • adityamwagh 10 days ago

        Oh, pretty cool. I didn’t know about this development.

  • efnx 13 days ago

    I'm a rust-gpu maintainer and can say that shared types on host and GPU are definitely intended. We've mostly been focused on graphics but are shifting efforts toward more general compute. There's a lot of work to do, though, and we all have day jobs, so we're looking for help. If you're interested in helping, say so on our GitHub.
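
    To illustrate what shared types buy you: the idea is a single #[repr(C)] definition compiled into both the host crate and the kernel crate, so layouts can't drift apart. A rough sketch of a rust-gpu compute entry point over such a type (attribute names are from spirv-std and may differ between releases; Particle is a made-up example type):

        // Shared definition, compiled by host and shader crates alike.
        #[repr(C)]
        #[derive(Clone, Copy)]
        pub struct Particle {
            pub position: [f32; 3],
            pub mass: f32,
        }

        // Shader crate, built with rust-gpu against spirv-std.
        use spirv_std::{glam::UVec3, spirv};

        #[spirv(compute(threads(64)))]
        pub fn integrate(
            #[spirv(global_invocation_id)] id: UVec3,
            #[spirv(storage_buffer, descriptor_set = 0, binding = 0)]
            particles: &mut [Particle],
        ) {
            let i = id.x as usize;
            if i < particles.len() {
                particles[i].mass *= 2.0; // placeholder per-element update
            }
        }

    The host side can then upload a &[Particle] buffer directly, with no hand-written serialization layer in between.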

    • the__alchemist 13 days ago

      What is the intended differentiator between this and wgpu for graphics? I didn't realize graphics was a goal; I've seen this mostly discussed in the context of CUDA. There doesn't have to be one, but I'm curious, as the CUDA/GPGPU side of the ecosystem is less developed, while catching up to wgpu may be a tall order. From a skim of its main page, it seems it may also focus on writing shaders in Rust.

      Tangent: what is the intended differentiator between Rust-CUDA and cudarc? Rust kernels with shared data structures is the big one, I'm guessing. That would be great! There doesn't have to be one, of course; more tools to choose from, and they encourage progress in each other.

  • hobofan 14 days ago

    Damn. I transferred ownership of the cudnn and cudnn-sys crates (by now almost ten-year-old crates that I'm certain nobody ever managed to use for anything useful) to the maintainers a few years back, as the project looked to be on a good trajectory, but it seems they never managed to actually release the crates. Hope the reboot pulls through!

  • sksxihve 14 days ago

    I think that's true of most newer languages: there's always a rush of libraries once a language starts to get popular. Go, for example, has lots of HTTP client libraries even though it has one in the standard library.

    Relevant xkcd: https://xkcd.com/927/

    • pests 14 days ago

      I think this was also due, in small part, to them (Rob Pike perhaps? Or Brad) live-streaming the creation of an HTTP server back in the early days; it was good tutorial fodder.

jjallen 14 days ago

I've been using the cudarc crate professionally for a while to write and call CUDA from Rust. I can highly recommend it. You don't have to use super-old rustc versions, although I haven't checked recently exactly what you do need.
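
For a flavor of the write-and-call workflow: you can compile a kernel at runtime via the nvrtc bindings and launch it. A sketch against an older cudarc release (the launch API has been reworked since, so exact names may differ):

    use cudarc::driver::{CudaDevice, LaunchAsync, LaunchConfig};
    use cudarc::nvrtc::compile_ptx;

    // The kernel itself is still CUDA C, compiled to PTX at runtime.
    const KERNEL: &str = r#"
    extern "C" __global__ void scale(float *data, float factor, size_t n) {
        size_t i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) { data[i] *= factor; }
    }
    "#;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let dev = CudaDevice::new(0)?;
        dev.load_ptx(compile_ptx(KERNEL)?, "mymodule", &["scale"])?;
        let scale = dev.get_func("mymodule", "scale").unwrap();

        let mut data = dev.htod_copy(vec![1.0f32; 1024])?;
        let cfg = LaunchConfig::for_num_elems(1024);
        unsafe { scale.launch(cfg, (&mut data, 2.0f32, 1024usize)) }?;

        let out = dev.dtoh_sync_copy(&data)?;
        assert!(out.iter().all(|&x| x == 2.0));
        Ok(())
    }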

  • the__alchemist 13 days ago

    It works on any recent Rust and CUDA version. The maintainer has historically added support for new GPU series and CUDA versions quickly.

    • jjallen 13 days ago

      Exactly. I thought it did; I just didn't want to claim too much, since it's been a couple of months since I looked. I wish things would coalesce around this one.

porphyra 14 days ago

Very cool to see this project get rebooted. I'm hoping it will have the critical mass needed to actually take off. Writing CUDA kernels in C++ is a pain.

In theory, since NVVM IR is based on LLVM IR, Rust on CUDA should be quite doable. In practice, of course, it is an extreme amount of work.

  • pjmlp 14 days ago

    Unless NVIDIA actually embraces this, it will never be better than the C++ toolchain, given the IDE integration, graphical debugging, and library ecosystem alone.

    Unless, that is, one is prepared to do lots of yak shaving; and who knows, NVIDIA may then actually pay attention, as has happened with CUDA support for other ecosystems.

    • Asraelite 13 days ago

      Yeah, I would prefer to see this effort put into AMD GPUs. They're not as powerful, but at least everything is open source and developer-friendly. They should be rewarded for that.

      • porphyra 10 days ago

        AMD GPUs actually have fairly powerful hardware. But even though the stack is open source, calling it "developer friendly" is rather generous. Just look at how TinyCorp struggled with AMD's ROCm before resorting to writing a completely sovereign stack.

nuc1e0n 14 days ago

Shouldn't it be called RUDA?

ein0p 13 days ago

For this to take off, it has to be supported by Nvidia at feature parity with C++. Otherwise it'll remain a niche tool, unfortunately. I do hope Nvidia either supports this or rolls out its own alternative. I've been looking to get off the C++ train for years, but stuff like this keeps me there.

shmerl 14 days ago

Looks like a dead end. Why CUDA? There should be some way to use Rust for GPU programming in a general fashion, without being tied to Nvidia.

  • kouteiheika 14 days ago

    There's no cross-vendor API that exposes the full power of the hardware. For example, you can use Vulkan to do compute on the GPU, but it doesn't expose all of the features that CUDA does, and you have to do the legwork of reimplementing the well-optimized libraries (e.g. cuBLAS or cuDNN) that you get for free with CUDA.

    • shmerl 14 days ago

      Make a compiler that takes Rust and compiles it into some IR, then another compiler that compiles that IR into GPU machine code. Then it can work, and what you developed in Rust becomes your API.

      That's the whole point of what's missing: a real compilation path, not some wrapper around CUDA.
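
      For what it's worth, half of that pipeline already exists upstream: rustc has an nvptx64-nvidia-cuda target that emits PTX, which the driver then compiles to machine code per architecture. A nightly-only sketch (the feature gates around the nvptx intrinsics have moved over time, so take the exact gate names loosely):

          // Build: cargo +nightly build --target nvptx64-nvidia-cuda --release
          #![no_std]
          #![feature(abi_ptx, stdarch_nvptx)]

          #[no_mangle]
          pub unsafe extern "ptx-kernel" fn axpy(n: usize, a: f32,
                                                 x: *const f32, y: *mut f32) {
              use core::arch::nvptx::{_block_dim_x, _block_idx_x, _thread_idx_x};
              let i = (_block_idx_x() * _block_dim_x() + _thread_idx_x()) as usize;
              if i < n {
                  *y.add(i) += a * *x.add(i);
              }
          }

          #[panic_handler]
          fn panic(_: &core::panic::PanicInfo) -> ! {
              loop {}
          }

      What's missing is everything around it: math libraries, debugging, and a maintained host-side story.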

  • pjmlp 14 days ago

    Because so far the others have failed to deliver anything worthwhile to use, with the same tooling ecosystem as CUDA.

    • coffeeaddict1 14 days ago

      While I agree that CUDA is the best-in-class API for GPU programming, OpenCL, Vulkan compute shaders, and SYCL are usable alternatives. I, for example, use compute shaders to write GPGPU algorithms that run on Mac, AMD, Intel, and Nvidia. It works OK. The debugging experience and ecosystem suck compared to CUDA, but being able to run the algorithms across platforms is a huge advantage over CUDA.

      • keldaris 14 days ago

        How are you writing compute shaders that work on all platforms, including Mac? Are you just writing Vulkan and relying on MoltenVK?

        AFAIK, the only solution that actually works on all major platforms without additional compatibility layers today is OpenCL 1.2, which also happens to be officially deprecated on macOS but still works for now.

        • coffeeaddict1 13 days ago

          Yes, MoltenVK works fine. Alternatively, you can use WebGPU (there are native C++ and Rust libs), which is a simpler but more limited API.
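
          For reference, on the Rust side that's the wgpu crate; a minimal compute dispatch looks roughly like this (descriptor fields shift between wgpu releases, so this is a sketch rather than something pinned to a version):

              // Run with an executor, e.g. pollster::block_on(run()).
              async fn run() {
                  let instance = wgpu::Instance::default();
                  let adapter = instance
                      .request_adapter(&wgpu::RequestAdapterOptions::default())
                      .await
                      .expect("no adapter");
                  let (device, queue) = adapter
                      .request_device(&wgpu::DeviceDescriptor::default(), None)
                      .await
                      .expect("no device");

                  // The same WGSL source runs on the Vulkan, Metal, DX12, and GL backends.
                  let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
                      label: None,
                      source: wgpu::ShaderSource::Wgsl(
                          "@compute @workgroup_size(64) fn main() {}".into(),
                      ),
                  });

                  let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
                      label: None,
                      layout: None,
                      module: &shader,
                      entry_point: "main",
                  });

                  let mut encoder = device.create_command_encoder(&Default::default());
                  {
                      let mut pass = encoder.begin_compute_pass(&Default::default());
                      pass.set_pipeline(&pipeline);
                      pass.dispatch_workgroups(1, 1, 1);
                  }
                  queue.submit(Some(encoder.finish()));
              }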

        • pjmlp 13 days ago

          And it's stuck with C99, versus C++20, Fortran, Julia, Haskell, C#, and anything else someone feels like targeting PTX with.

          • keldaris 13 days ago

            Technically, OpenCL can also include inline PTX assembly in kernels (unlike any compute shader API I've ever seen), which is relevant for targeting things like tensor cores. You're absolutely right about the language limitation, though.

            • pjmlp 13 days ago

              At which point, why bother? PTX is CUDA.

              • keldaris 13 days ago

                Generally, the reason to bother with this approach is if you have a project that only needs tensor cores in a tiny part of the code and otherwise benefits from the cross platform nature of OpenCL, so you have a mostly shared codebase with a small vendor-specific optimization in a kernel or two. I've been in that situation and do find that approach valuable, but I'll be the first to admit the modern GPGPU landscape is full of unpleasant compromises whichever way you look.

      • pjmlp 14 days ago

        No, they aren't, because they lack CUDA's polyglot support and, as you acknowledge, the debugging experience and ecosystem suck.

      • fragmede 14 days ago

        Why do you need to run across all those platforms? What's the cost-benefit of doing so?

        • coffeeaddict1 13 days ago

          Well, it really depends on the kind of work you're doing. My (non-AI) software lets users run my algorithms on whatever server-side GPU or local device they have. That's a big advantage, IMO.

          • fragmede 13 days ago

            Interesting! Can you say more about what kinds of algorithms your software runs?

            • coffeeaddict1 13 days ago

              My work is primarily the processing of medical images (usually large 3D images). Doing this on the GPU can be up to 10-20x faster.

              • fragmede 13 days ago

                But what about that workload wants to be multi-platform, instead of picking one platform and specializing, probably picking up some more optimizations along the way?

    • shmerl 14 days ago

      To deliver, you need to make Rust target the GPU in a general way, via some IR, and then compile that IR into machine code for each GPU architecture specifically.

      So this project is a dead end, because they themselves are those "others": they are developing it, and they are doing it wrong.

      • pjmlp 13 days ago

        Plus IDE support, Nsight-level debugging, and GPU libraries. Yes, it is most likely bound to fail unless NVidia, as happened with other languages, sees enough business value to lend a helping hand.

        They are already using Rust in Dynamo, even though the public API is Python.

  • the__alchemist 14 days ago

    CUDA is the easiest-to-use and most popular GPGPU framework. I agree that it's unfortunate there aren't good alternatives! As kouteiheika pointed out, you can use Vulkan (or OpenCL), but they are not as pleasant.

    • shmerl 14 days ago

      That defeats the purpose. The easy-to-use thing should be something in Rust, not CUDA.

      • adastra22 13 days ago

        The purpose is to get shit done.