Compiled vs. Interpreted Languages
Compiled vs. Interpreted Languages
In terms of raw performance, a codebase written in a compiled language almost always beats an equivalent codebase written in an interpreted language. Of course, no amount of compiler magic will save you from poorly structured or inefficient code. But the gap can feel like comparing the size of a car to the size of the Sun: it's that vast, and I'm being conservative. The reason comes down to the low-level facilities that compiled languages are better suited to access. Think of a technology stack where each layer adds its own execution overhead.
To further explain my point: with an interpreted language, you always need an interpreter (or a just-in-time compiler that translates code while the program runs), and that translation adds overhead every time the code executes. With a compiled language, ahead-of-time compilation pays the translation cost once, at build time, for all future runs: the code is already in binary form before execution, so the processor can understand the application's instructions directly and no translation overhead is ever paid again. Of course, the operating system always sits between the program and the hardware and is a layer in itself, but any generally available operating system has been heavily optimized to minimize its impact on the performance of end-user software.
So, what’s the point?
There are some less-hyped programming languages that, even if you don't fancy learning them, enable you to enjoy the perks of our modern world. These languages have unparalleled power and reach in use cases we take for granted. Let me provide some examples:
- C or C++: Most performance-critical systems use C or C++: operating systems, trading systems, embedded systems, robots, and even your web browser. Microsoft's Windows kernel is mostly written in C (with some Assembly language). Chromium and most of its derivatives are developed in C++. The world's most popular databases (Oracle, MySQL, SQL Server, and PostgreSQL) are written in these languages too. And because they compile straight to machine code, if you know Assembly language you can inspect the compiler's output and reason about performance down to individual CPU operations.
- Assembly Language: People often dismiss Assembly language as too obsolete to work with, as if it were a relic of the '50s or '60s, but that's mostly a misconception. What can become obsolete is a particular instruction set, when a new processor architecture replaces it. Whenever a new architecture appears, or a processor adds features that extend the current instruction set, Assembly language is extended along with it. Short of writing raw binary machine code, Assembly gets you as close to the processor as possible, and that unlocks performance you can't reach any other way.
- Java and C#: Although Java is not compiled straight to machine code, it compiles to bytecode, and the Java Virtual Machine (JVM) needs to do relatively little translation between that intermediate representation and binary instructions (hot code paths are typically just-in-time compiled to native code). On top of that you get garbage collection and multiplatform support. C# follows the same model on the .NET runtime and shares most of these features with Java, but the two platforms are not related to each other.
- COBOL: Some government and financial institutions still use this programming language today, as it has proven rock-solid for mission-critical day-to-day operations. When you make a bank transfer or pay with your debit or credit card, the request is most likely handled by a COBOL-backed mainframe somewhere. Old, but not obsolete: still in production today, even if it's rarely the first choice for a new system.
Before jumping on a highly hyped language, remember to consider the alternatives! Our modern world wouldn’t exist if not for these programming languages.