Paradigms in Disguise

If you search the Web for "similarities between functional programming and object-oriented programming", all the links you get discuss the differences instead. This is not an exaggeration: I couldn't find a single article on the topic, as if the two paradigms were outright mutually exclusive. But is that the case?

This post is about a shocking moment that made me realize the similarities between some parts of the functional and object-oriented paradigms. What was even more surprising is that an OOP-style language I designed turned out to have been functional all along, at least when it comes to inductive types.

Puzzling Inductive Types

Algebraic data types have always looked mysterious to me. Let's look at an inductive type Exp defined in Haskell, whose purpose is to represent an expression internally inside a programming-language interpreter (if that sounds like too much, just imagine you are writing a calculator):

data Exp = Const Int | Plus Exp Exp

So expressive (pun intended). If you are new to functional programming, I'll be surprised if you don't have a thousand questions in your mind -- because that's what I had. Here is what the above snippet means: an Exp object can represent either a constant or an addition of two Exps (subexpressions).

These kinds of type definitions have always felt elegant to me, and they still do. Unfortunately, the wonder and mystery have worn off, but I swear they looked super mysterious once.

The Matching That Unlocked It

Before giving the detailed explanation, let me state it up front: functional constructors correspond not to constructors in OOP, but to subclasses.

Okay, let me rewind. Why does an inductive type definition look so "impossible" to a hardcore C programmer? Not because it is impossible to program in C. A struct can contain a pointer to another object of the same kind, which is the basis for inductive types. Stated more explicitly, this is how one could implement the type Exp in handwritten C (feel free to skip this):

  • An enum ExpType that indicates the type of an expression (CONST, PLUS)
  • A struct Exp with a member of the type ExpType
  • Member n of type int to use when type == CONST
  • Members e1 and e2 of type struct Exp * (pointers, since a struct cannot contain itself) to use when type == PLUS
  • Maybe use a union inside the struct to make the layout semantically tighter

So verbose. Maybe it is the fact that functional language processors somehow figure all of this out that once made them so mysterious to me. As an imperative person, I found this irritating. So I kept asking myself, "how close can one get to those elegant inductive type definitions, with the least amount of verbosity, in an imperative language?"

The key was the word constructor. Let's look at the Haskell snippet once again:

data Exp = Const Int | Plus Exp Exp

Const and Plus in the above snippet are called constructors (their names are custom). I tried to map them to OOP, which was misleading at first. A class can have two different constructors in an OOP language, but that doesn't help, because both return objects of the same kind. Can we have constructors that return objects of different, yet somehow related, kinds? Yes: enter subclasses.

The Full Picture

What if Exp were a class, with Const and Plus as two subclasses of it? Nothing new there in itself; that is exactly how an OOP programmer would implement the inductive type Exp. But compared to C, it is surprisingly close to the functional style.

To get the full picture, you need to look at how the type is used in instantiations and matches. Consider the full program that uses Exp to store and evaluate the expression 5 + (7 + 3). Unfortunately, Haskell is so expressive that it would be unfair to compare any OOP language against it. So let's switch to Coq, which is also functional:

(* 2023-11-24 *)

Inductive Exp :=
  | Const : nat -> Exp
  | Plus  : Exp -> Exp -> Exp.

Fixpoint eval e :=
	match e with
	| Const n     => n
	| Plus e1 e2  => (eval e1) + (eval e2)
	end.

Compute (eval (Plus (Const 5) (Plus (Const 7) (Const 3)))).

Now have a look at the same program in ngg, an OOP-supporting language that I've been developing since 2019:

// 2023-11-24

nonfinal class Exp staticrtti; ;

class Const extends Exp takes n int; class Plus extends Exp takes e1, e2 Exp;

fun eval gives int takes e Exp switch finally e case as Const cexp return n\cexp;; case as Plus pexp return sum =eval/[e1\pexp] =eval/[e2\pexp];; ;

fun $main local myexp / new Plus/[new Const/[5], new Plus/[new Const/[7], new Const/[3]]]

=printf/['%d\n', =eval/[myexp]]

return 0 ;

Of course, the ngg code is not as short and sweet as the Coq code, but it shows that OOP can come really close to functional when it comes to inductive types, a nontrivial aspect of functional programming. Also, I want you to know that the "functional look" of some of the language constructs is totally unintentional. For example, the switch construct that helps to branch based on the class of an object was originally intended as a purely OOP thing.

(Note: class ... takes is ngg syntactic sugar for creating an automatic constructor method, member variables, and assignments. It was originally planned to ease OOP in ngg, not to mimic functional programming.)

(One more interesting thing: in Haskell, matching also happens implicitly when you write multiple equations of a function, one per constructor of a type; maybe this could be mimicked with ad-hoc polymorphism in OOP.)

Rust vs ngg

To be honest, I didn't start this thought experiment (which later became a real one) with the expression program; it started with the popular functional-style implementation of linked lists in Rust:

// 2023-08-24, 2023-10-25

enum List<T> {
	Cons(T, Box<List<T>>),
	Nil,
}

fn printlist(lst: List<i32>) {
	match lst {
	List::Cons(n, tail) => { println!("{} -> ", n); printlist(*tail) },
	_ => println!("end."),
	}
}

fn main() {
	let mylist = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Cons(3, Box::new(List::Nil))))));
	printlist(mylist);
}

That was the functional version. Now the OOP version in ngg:

// 2023-10-25
// 2023-11-27: Make use of the class...takes syntax sugar

nonfinal class ListItem

class Cons extends ListItem takes n int, tail ListItem;
class Nil extends ListItem;

fun printlist takes li ListItem
	switch finally li
	case as Cons hd
		=printf/["%d -> ", n\hd]
		=printlist/[tail\hd]
	;;
	case as Nil
		=printf/["end.\n"]
	;;
	;

fun $main
	var hd / new Cons/[1, new Cons/[2, new Cons/[3, new Nil]]]
	=printlist/[hd]
	return 0
	;

Scratching the Surface?

Please don't take this post as a claim that OOP is functional programming in disguise or vice versa. For one thing, functions have to be first-class citizens for a language to be considered functional. However, paradigms are not just about core values, but also about the patterns that follow. Recognizing common patterns or counterparts is always interesting, and even useful, because you can utilize the nice features of another paradigm without making a complete switch. I just wanted to document a similarity that seems to have been overlooked.

One more important thing: although it might look like the functional and OOP code snippets are the "evidence" for the "claims" in this post, it's actually something deeper. If you are interested, ask yourself how you would implement inductive types, subclasses and run-time type information in C, a purely procedural language. Yes, I arrived at this bottom-up, not top-down, unlike how it's presented in this post.

Read more from Nandakumar at