High-Performance Computing (HPC): Applications and Trends in Computer Science

March 7, 2022

High-performance computing (HPC) is increasingly being applied across STEM fields. From bioinformatics and genetic research to artificial intelligence and space flight simulations, any instance where huge amounts of data and complex calculations must be processed at high speed is one where HPC becomes not only useful but necessary.

Typical computers cannot handle the amount of "big data" generated by, say, sequencing the human genome, which can produce several terabytes of complex data. This and other types of research often require the heavier-duty computing that HPC provides. HPC setups consist of clusters of servers, supercomputers equipped with powerful processors, graphics cards, and memory. According to IBM, these setups can be one million times more powerful than the fastest personal laptop. With HPC, the ability to correctly process large amounts of data is as important as the ability to do so quickly. But this comes at a price, as some trade-off between speed and reliable processing has long been considered unavoidable. A team of computer science research students and professors at the Massachusetts Institute of Technology (MIT) is now revisiting this issue and has developed a promising solution: a new programming language written specifically for HPC. And it all comes back to zeros and ones.

“Everything in our language is aimed at producing either a single number or a tensor,” explains MIT PhD student Amanda Liu. The team achieves this with what they call “a tensor language,” or ATL. Tensors are n-dimensional arrays that generalize one-dimensional vectors and two-dimensional matrices, allowing data of arbitrarily many dimensions to be computed on uniformly. While tensor-based optimization already exists in tools such as TensorFlow, widely used from Python and R, MIT Assistant Professor of Electrical Engineering & Computer Science Jonathan Ragan-Kelley notes that this approach has been seen to cause slowdowns and complicate downstream optimizations, “violating the Cheap Gradients Principle.” According to Cornell University, this principle states that “the computational cost of computing the gradient of a scalar-valued function is nearly the same (often within a factor of 5) as that of simply computing the function itself…[and] is of central importance in optimization.” The violation stems from the way programs like TensorFlow compute certain original functions against their gradient solutions. Thus, the need for a more specialized HPC language arose.
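The idea that vectors and matrices are just the low-dimensional cases of a single tensor abstraction can be shown concretely. This sketch uses NumPy rather than ATL itself, since ATL is still a research prototype; the point is only how one n-dimensional array type, and one contraction operation, covers all ranks.

```python
import numpy as np

# A tensor is an n-dimensional array: vectors and matrices are just
# the rank-1 and rank-2 special cases.
vector = np.array([1.0, 2.0, 3.0])   # rank 1, shape (3,)
matrix = np.ones((3, 4))             # rank 2, shape (3, 4)
tensor = np.zeros((2, 3, 4))         # rank 3, shape (2, 3, 4)

# One contraction primitive generalizes the familiar operations:
dot = np.tensordot(vector, vector, axes=1)         # scalar (dot product)
mv = np.tensordot(matrix, np.ones(4), axes=1)      # shape (3,)  (matrix-vector)
higher = np.tensordot(tensor, np.ones(4), axes=1)  # shape (2, 3) (rank-3 case)
```

A language whose programs always produce "either a single number or a tensor" can treat all of these uniformly, which is what makes whole-program rewriting tractable.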

Liu explains, “given that high-performance computing is so resource-intensive, you want to be able to modify, or rewrite, programs into an optimal form in order to speed things up.” ATL supports this with an accompanying framework that shows how programs can be rewritten into simpler, faster forms. The system also builds on the Coq proof assistant, which helps guarantee that each optimization is correct by checking a mathematical proof.
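The kind of rewrite Liu describes can be sketched with a simple Python analogy (this is not ATL syntax, and the check below is a spot test, whereas ATL's Coq-based approach produces a machine-checked proof that the rewrite is correct for all inputs):

```python
# Analogy for an optimizing rewrite: two passes over the data fused
# into one, removing a temporary array. ATL-style systems prove such
# rewrites correct rather than merely testing them.

def original(xs):
    # Unoptimized form: one pass to scale, a second pass to sum,
    # plus a temporary list holding the intermediate result.
    scaled = [2 * x for x in xs]
    return sum(scaled)

def rewritten(xs):
    # Fused form: a single traversal, no intermediate storage.
    total = 0
    for x in xs:
        total += 2 * x
    return total

# Spot check that the rewrite preserves the program's meaning.
data = list(range(10))
assert original(data) == rewritten(data)
```

On tensor workloads at HPC scale, eliminating intermediate arrays like `scaled` is exactly where such rewrites pay off, and the proof assistant removes the risk that a rewrite silently changes the answer.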

While this language is still a prototype, there are indications that ATL could be the next avenue in HPC optimization, especially when approaching the more complex issue of cybersecurity in the HPC environment. Some emerging studies show promise in using tensor decompositions to secure data in the cloud, such as on Amazon Web Services (AWS), but further research is needed in this area.

Want to learn more about our computer science, artificial intelligence, and cybersecurity program offerings? Visit our website to learn more about Capitol Tech’s diverse degree programs, or contact




Bernstein, G., Mara, M., Li, T., Maclaurin, D., and Ragan-Kelley, J. (2020). Differentiating A Tensor Language. arXiv.

Cornell University. (2018). Mathematics > Optimization and Control > Provably Correct Automatic Subdifferentiation for Qualified Programs. arXiv.

IBM. (2022). What is supercomputing technology?

Nadis, S. (7 Feb, 2022). A new programming language for high-performance computers. MIT News.

Ong, J., et al. (2021). Protecting Big Data Privacy Using Randomized Tensor Network Decomposition and Dispersed Tensor Computation [Experiments, Analyses & Benchmarks]. arXiv.