I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA. While I have not used any of these languages since, I have used JAX[1] quite a lot, where the learnings from this course have been quite helpful. Many people will end up writing code for GPUs through different levels of abstraction, but those who are able to reason about the semantics through functional primitives might have an easier time understanding what's happening under the hood.
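To make that concrete, here's a minimal JAX sketch (my own illustration, not from the course or the paper) of what "reasoning through functional primitives" looks like: a dot product is a map (elementwise multiply) followed by a reduce (sum), and vmap lifts it over a batch dimension.

    import jax
    import jax.numpy as jnp

    def dot(xs, ys):
        # elementwise multiply (a "map") followed by a sum (a "reduce");
        # XLA will typically fuse these into a single GPU kernel
        return jnp.sum(xs * ys)

    # vmap lifts dot over an outer batch dimension, much like mapping a
    # function over the rows of an array in a functional array language
    batched_dot = jax.jit(jax.vmap(dot))

    xs = jnp.ones((8, 128))
    ys = jnp.ones((8, 128))
    print(batched_dot(xs, ys))  # eight dot products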
> I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA.
PMPH? :)
I think the intended footnote was accidentally left out. Were you talking about this Python library?
https://docs.jax.dev/en/latest/index.html
There's a JAX for AI/LM too
https://github.com/jax-ml/jax
but yeah no idea which the OP meant
Chapel got a mention in the 'Related Work' section. I looked at it a few years ago and found it compelling (but I don't do HPC so it was just window watching). What's the HN feedback on Chapel?
https://chapel-lang.org/
@yubblegum: I'm unfairly biased towards Chapel (positively), so won't try to characterize HN's opinion on it. But I did want to note that while Chapel's original and main reason for being is HPC, now that everyone lives in a parallel-computing world, users also benefit from using Chapel in desktop environments where they want to do multicore and/or GPU programming. One such example is covered in this interview with an atmospheric science researcher for whom it has replaced Python as his go-to desktop language: https://chapel-lang.org/blog/posts/7qs-dias/
If you scroll down on the Chapel-lang website, there seems to be a lot of activity happening with this language. There is even going to be a ChapelCon 2025.
https://chapel-lang.org/blog/posts/chapelcon25-announcement/
Chapel and Lustre (a parallel, distributed file system) from Cray were funded by DARPA’s High Productivity Computing Systems program. These, along with Fortress from Sun, were developed explicitly to enable and ‘simplify’ the programming of distributed “supercomputers”. The work and artifacts, along with the published documentation and research, are of particularly high quality.
Even if you aren’t involved in HPC I’d say the concepts transfer or provide a great basis for parallel and distributed idioms and methodologies that can be adapted to existing languages or used in development of new languages.
TL;DR - Chapel is cool, and if you are interested in the general subject matter, Fortress (discontinued, and with a different focus) is also worth checking out.
Are these languages pure in the functional sense? E.g. Do they allow/encourage mutation? My understanding is that APL permits mutable state and side effects, but maybe they are rarely used in practice? If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.
> My understanding is that APL permits mutable state and side effects ... If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.
    a←'hello'
    a[1]←'j'

This does _not_ modify the array in-place. It's actually the same as:

    a←'hello'
    a←'j'@1⊢a

which is more obviously functional. It is easy to convince yourself of this:

    a←'hello'
    b←a
    a[1]←'j'
    b,a

returns 'hellojello' and not 'jellojello'. APL arrays are values in the same sense as value types in any functional language. You don't explicitly modify arrays in-place; if they happen to have a refcount of 1, operations may happen in-place as an optimization, but not in a manner which observably alters program behavior.
Futhark, SaC, and Accelerate have purely functional semantics. Futhark has something called "in-place updates" that operationally mutate the given array, but semantically they work as if a new array is created (and are statically guaranteed to work this way by the type system).
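For comparison, JAX has the same flavor of functional update (a sketch of my own, not Futhark syntax): x.at[i].set(v) semantically returns a fresh array, while the compiler is free to avoid the copy when the old value is provably dead.

    import jax
    import jax.numpy as jnp

    @jax.jit
    def update(x):
        # semantically: build a new array with index 0 replaced;
        # operationally: XLA may elide the copy (e.g. inside a fused
        # computation or with buffer donation), but never observably
        return x.at[0].set(42.0)

    x = jnp.zeros(4)
    y = update(x)
    print(x)  # [0. 0. 0. 0.] -- x is unchanged: value semantics
    print(y)  # [42.  0.  0.  0.]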
Accelerate is a Haskell library/eDSL.
Matlab supposedly is “portable APL”.
The man who invented MATLAB, Cleve Moler, said: “[I’ve] always seen MATLAB as ‘portable APL’.” [1]
…why the downvoting?
[1] - https://computinged.wordpress.com/2012/06/14/matlab-and-apl-...
I didn't downvote, but ... as someone who has used both, I find this statement nonsensical.
APL is mathematical notation that is also executable. It is all about putting a mathematical algorithm in a succinct, terse way.
MATLAB is a clunky Fortran-like language that does simple 2D matrix stuff reasonably tersely (though not remotely as tersely as APL), and does everything else horribly awkwardly and verbosely.
Modern MATLAB might be comparable to 1960s APL, but original MATLAB was most certainly not, and even modern MATLAB isn't comparable to modern APL (and its successors such as BQN and K).
Notice that all the languages mentioned depend on an external BLAS library, for example OpenBLAS, for performance.
The D language has excellent support for functional and array features, along with parallelism. On top of that, though not well known, it has a high-performance native BLAS-like library whose ergonomics and intuitiveness are similar to Python's [1].
[1] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen (2016):
http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...
> Notice that all the languages mentioned depend on an external BLAS library, for example OpenBLAS, for performance.
That's incorrect. Futhark doesn't even have linear algebra primitives---everything has to be done in terms of map/reduce/etc: https://github.com/diku-dk/linalg/blob/master/lib/github.com...
The same holds for Accelerate, and I'm fairly sure also SaC and APL. DaCe even gets a special mention in the paper in section 10.5 stating that they specifically _do_ use BLAS bindings.
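To make "everything in terms of map/reduce" concrete, here's a matrix multiply written that way as a minimal JAX sketch (my illustration; Futhark's actual code is in the linalg package linked above): nested maps around a dot product, with no BLAS routine anywhere at the source level.

    import jax
    import jax.numpy as jnp

    def dot(u, v):
        return jnp.sum(u * v)  # map (elementwise *) + reduce (sum)

    def matmul(a, b):
        # for each row of a and each column of b, take a dot product;
        # the whole algorithm is just nested maps around a reduce
        return jax.vmap(lambda row:
                        jax.vmap(lambda col: dot(row, col))(b.T))(a)

    a = jnp.ones((3, 4))
    b = jnp.ones((4, 5))
    print(matmul(a, b).shape)  # (3, 5)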
"Notice that all the all the languages mentioned depends on the external BLAS library". I didn't notice this 'cause I don't think it's true. For example, it highly implausible that APL[1] would depend on BLAS[2] considering APL predates BLAS by 5-10 years ("developed in the sixties" versus "between 1971 and 1973"). I don't think Futhark uses BLAS either but in modern stupidity, this currently two hour old parent has taken over Google results so it's hard to find references.
[1] https://en.wikipedia.org/wiki/APL_(programming_language)
[2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...