The “single return” (aka “only return once”) style that many programmers use is almost always inferior to a “return early, return often” style.

Many programming conventions and styles are touted as “best practices” but are actually based on an incomplete or flawed mental model of what makes code easy to reason about. Following style conventions without really understanding them is often quite harmful. Truly good code style requires a basis in logical reasoning, not mere imitation of what other people arbitrarily say to do.

The popular (but harmful) belief that there “should only be one return statement in each function” is one of the best examples of a “best practice” that is nearly always counterproductive. Ironically, a “single return” policy would be better described as a “worst practice”, given that it is one of the most consistently harmful conventions for code readability and quality.

There are far worse things you can do to code, of course, but the “single return” style is one of the most consistent ways of damaging readability and quality, usually at least slightly and often significantly.

Why though? What’s so bad about it, and why is “return early, return often” better? That’s what I’ll briefly discuss here. I wanted to write this article because I’ve noticed what seems like an increase in the “single return” style on the internet lately, even though it is objectively a much worse style most of the time, so I’m trying to do my part to counteract that.

Here’s a (C family style) pseudo-code example of what “single return” (aka “only return once”) code often looks like:

bool DoSomething(...) {
	bool operationSucceeded = true;

	if (FailureCondition1()) {
		HandleFailureCondition1();
		operationSucceeded = false;
	}
	else {
		DoSomething_Part1();

		if (FailureCondition2()) {
			HandleFailureCondition2();
			operationSucceeded = false;
		}
		else {
			DoSomething_Part2();

			if (FailureCondition3()) {
				HandleFailureCondition3();
				operationSucceeded = false;
			}
			else {
				DoSomething_Part3();

				if (FailureCondition4()) {
					HandleFailureCondition4();
					operationSucceeded = false;
				}
				else {
					DoSomething_Part4();
					operationSucceeded = true;
				}
			}
		}
	}
	
	return operationSucceeded;
}

Intimidating looking, right? You have to read the above code very carefully to be sure of exactly what it is doing. You’re forced to go through everything with a fine-toothed comb, mentally trace through many levels of nesting, and constantly watch for any unexpected place where operationSucceeded could be changing.

We cannot assume that just because operationSucceeded = true appears somewhere, true will be what is ultimately returned. It happens to be the case in this example that if any assignment of operationSucceeded = true is reached then true will be returned, but in general a reader of the code cannot know that in advance.

Side Note: People sometimes think they can get away with not reading every line of code in these cases and still understand the code, but that’s sloppy thinking and reflects an insufficient appreciation of the dangers of mutability. If we want to guarantee correctness, mutable state forces us to check, at least subconsciously, that none of the m mutable variables currently in scope are changing, and we must perform that check on every one of the n subsequent lines. In this way, mutable variables can add massive mental overhead to even small segments of code: each mutable variable adds O(n) extra cognitive load by requiring all n subsequent lines to be checked against its potential change, and m mutable variables together add O(m·n).

In stark contrast, here’s what the code looks like when it is rewritten to use a “return early, return often” style using guard clauses:

bool DoSomething(...) {
	if (FailureCondition1()) {
		HandleFailureCondition1();
		return false;
	}
	DoSomething_Part1();

	if (FailureCondition2()) {
		HandleFailureCondition2();
		return false;
	}
	DoSomething_Part2();
	
	if (FailureCondition3()) {
		HandleFailureCondition3();
		return false;
	}
	DoSomething_Part3();
	
	if (FailureCondition4()) {
		HandleFailureCondition4();
		return false;
	}
	DoSomething_Part4();
	return true;
}

Much easier to read and to reason about, right?

Notice how much more foolproof this version is and how there is no longer any need to mentally track any mutable local variables. This implementation is much more functionally pure, expressive, and communicative than the “single return” version was.

Notice also that, contrary to what “single return” advocates believe, the number of paths through the code that you have to consider is not greater with a “return early, return often” style than with a “single return” style, even though fewer paths is supposed to be a selling point of “single return”. The branching control flow here is actually much easier to understand, not harder.

The number of possible paths in the “return early, return often” style is always equal to or less than in the “single return” style. Keeping a path alive longer than necessary and continuing to run the remaining code (even though that code is now irrelevant) can only ever increase the number of code paths and interactions that must be considered.

The main premise behind the supposed “advantage” of a “single return” style is therefore fundamentally wrong, indeed entirely backwards. The “single return” style doesn’t really simplify anything, except possibly manual memory/resource management (I’ll say more about that later).

In fact, depending on the structure of the code, a “single return” style can in some cases cause the number of paths to grow exponentially: imagine multiple independent if-statements nested inside each other, forked paths branching off from other forked paths like a tree, hence the combinatorial explosion. In contrast, a “return early, return often” style makes it much easier (and much more likely) to keep the number of paths growing linearly, and to keep performance high, redundant condition checks low, and general structural complexity low.
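To make the path counting concrete, here’s a small hedged sketch of one way this can happen (StepA, StepB, and StepC are hypothetical calls, and the checks are shown sequentially rather than nested for simplicity). In the single-return version the later checks still run even after an earlier failure, so each independent check doubles the number of distinct execution paths; in the early-return version the count stays linear:

bool DoWorkSingleReturn(...) {
	bool ok = true;
	if (!StepA()) { ok = false; }	// 2 possible states flow past this line
	if (!StepB()) { ok = false; }	// 2 * 2 = 4 distinct paths so far
	if (!StepC()) { ok = false; }	// 2 * 2 * 2 = 8 distinct paths reach the return
	return ok;	// with n independent checks: 2^n paths
}

bool DoWorkEarlyReturn(...) {
	if (!StepA()) { return false; }	// one exit path
	if (!StepB()) { return false; }	// one more
	if (!StepC()) { return false; }	// one more
	return true;	// with n checks: only n + 1 paths total
}

Three checks already mean 8 paths versus 4; ten checks would mean 1024 versus 11.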

Depending on the behavior of the code in our example, we may even be able to simplify it further, like so:

bool DoSomething(...) {
	if (FailureCondition1()) {
		HandleFailureCondition1();
		return false;
	}
	else if (FailureCondition2()) {
		HandleFailureCondition2();
		return false;
	}
	else if (FailureCondition3()) {
		HandleFailureCondition3();
		return false;
	}
	else if (FailureCondition4()) {
		HandleFailureCondition4();
		return false;
	}
	
	DoSomething_Part1();
	DoSomething_Part2();
	DoSomething_Part3();
	DoSomething_Part4();
	return true;
}

Again though, it’s important to realize that this second, even simpler form of the “return early, return often” code is only valid if the behavior of the specific code remains the same as in the interleaved form (or, if not the same, at least equally or more desirable). For example, if DoSomething_Part1() can affect whether FailureCondition2() holds, then the checks cannot all be hoisted to the top like this.

Also, to those of you who may be tempted to think that the multiple instances of return false are “redundant” or “duplicated code”: you’re mostly wrong. What counts as “duplication” is actually highly subjective, despite popular rigid-minded “best practices” advice to the contrary, and in this case I’d argue it’s better not to think of these returns as duplication at all.

Let me explain.

Just because code looks duplicated doesn’t mean it actually is from a logical standpoint. If I have a store that sells apples and oranges and they both cost $5, then does that mean that two separate functions that both return 5 for the price represent code duplication? No, certainly not. Those are two logically distinct cases. The fact that they currently have the same value is coincidental. The exact same kind of thing is happening in our example code. Each if-block is (in the general case) a logically distinct case.
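To put that store analogy into code form (a hypothetical sketch; the function names are made up):

// Both functions happen to return 5 today, but they are logically independent
// facts: either price could change tomorrow without the other changing.
int ApplePriceInDollars() { return 5; }
int OrangePriceInDollars() { return 5; }

Merging these just because the values currently match would couple two facts that have no logical reason to stay equal, and the same reasoning applies to the multiple return false statements in our example.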

Programmers who use a “single return” style sometimes cite “redundant value returns” like these as if they constitute duplicated code and hence a violation of the “don’t repeat yourself” principle. They’re essentially wrong though.

This is what happens when you apply “best practices” to code without actually understanding the underlying logical structure of what you’re talking about. You just end up with rigid ideological habits that really do nothing but harm the quality of your code. Just because an idea is popular with programmers (or with any other group of people) doesn’t make it logically correct. The emotional feeling of certainty is not the same thing as real logical certainty. Superficial “best practices” are often counterproductive.

Within a language with the kinds of limitations that C, C++, C#, Java, etc. (the C family) have, there isn’t much redundancy in the above code as far as we can tell from what is given here (it depends on the behavior of the function calls, etc.). A language with better control structures might let you write less here, and we admittedly could use C/C++ macros. Yet even then it would still be true that, for all we know, these return values are (in the general case) just coincidentally the same. That’s the proper logical perspective on it.
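For example, here’s a hypothetical sketch of the macro route (FAIL_IF is a made-up name, and this is shown to illustrate the idea rather than to recommend it):

// A C/C++ preprocessor macro that bundles "check, handle, return false" into one statement.
#define FAIL_IF(condition, handler) \
	do { if (condition) { handler(); return false; } } while (0)

bool DoSomething(...) {
	FAIL_IF(FailureCondition1(), HandleFailureCondition1);
	DoSomething_Part1();
	FAIL_IF(FailureCondition2(), HandleFailureCondition2);
	DoSomething_Part2();
	FAIL_IF(FailureCondition3(), HandleFailureCondition3);
	DoSomething_Part3();
	FAIL_IF(FailureCondition4(), HandleFailureCondition4);
	DoSomething_Part4();
	return true;
}

Even with the macro, each expansion still contains its own logically distinct return false; the textual sharing doesn’t change the underlying logic.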

Anyway, let’s take this discussion in a different direction now and talk about where the “single return” style may have originated and about the small minority of cases where it has some potential use or value.

This is just according to my own current understanding and personal theory. It may be wrong, so treat it skeptically, as an estimate or merely a possibility.

There are two main (possibly independent) sources from which the “single return” style may have originated or been popularized:

Theory #1: Edsger Dijkstra (of Dijkstra’s algorithm fame) advised against allowing “multiple returns”, and later programmers misinterpreted that advice and propagated the misinterpretation as an ideology thereafter…

In Dijkstra’s time, programming languages made it possible to return to a different location than the one a function was originally called from, and this (from what I heard/read years ago) is what he was referring to when he advocated against the use of “multiple returns”.

He didn’t say not to return from multiple locations in a function; he said not to return to a different location than the one the function was called from. Modern languages don’t normally even allow that kind of return anymore, since it amounts to an especially dangerous form of unrestricted goto. Callers expect a function call to return back to the call site, not to somewhere else.

It’s possible that this advice inadvertently set in motion a misinterpretation that became a “best practice” (which is actually a “worst practice” as we’ve shown in the discussion above) of following a “single return” style.

People who follow a “single return” style sometimes even claim that returning from multiple different locations in a function is “pretty much the same as a goto” even though it really isn’t. An early return is just normal control flow, since a function returning is an entirely expected event. Early returns cannot create the kind of spaghetti code that goto statements can. It’s impossible.

The fact that this “pretty much the same as a goto” objection to multiple returns is so common among people who follow a “single return” style is evidence that the style may indeed have originated from a misinterpretation of Dijkstra’s advice against violating function calling conventions.

Why else would users of this style call multiple returns “like goto” when, as shown above, return clearly isn’t (in terms of making things harder to reason about), and when goto abuse happens to be exactly what Dijkstra was actually arguing against? Add a telephone game of generations of programmers passing this terrible “best practice” on to each other, and there you go. That would explain why the style exists.

Theory #2: When writing C or C++ code (or any other language where memory/resources sometimes have to be managed manually), a “single return” style can make it easier to avoid accidentally forgetting to free the memory/resources you are using along every possible path.

Programmers who work with C or C++ (or similar languages) may thus have spread/popularized the “single return” style to other languages where manual memory/resource management is no longer relevant or is a much diminished concern.

Basically, the idea is that since in C or C++ (etc.) you have to remember to free any memory or resource you claim in a scope, you consequently also have to make sure every possible path out of the function frees all the relevant memory/resources, i.e. the appropriate cleanup calls must come before every early return you make.

This is easy to forget, so some C or C++ (etc.) programmers adopt the policy of returning only once per function and keeping as much of the freeing as possible at the site of that one return statement, thus reducing the chances of forgetting to free the memory/resources.
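For example, here’s a hedged C-style sketch of that pattern (LoadAndProcess, ReadInto, Process, and BUFFER_SIZE are all hypothetical names; malloc and free are the standard C allocation functions):

bool LoadAndProcess(const char* path) {
	bool ok = false;
	char* buffer = malloc(BUFFER_SIZE);	// resource claimed here...
	if (buffer != NULL) {
		if (ReadInto(buffer, path)) {
			ok = Process(buffer);
		}
	}
	free(buffer);	// ...and released at the single exit (free(NULL) is a safe no-op)
	return ok;
}

Notice that funneling everything to that one exit is exactly what produces the nesting criticized earlier; the pattern trades readability for a lower chance of a leak.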

I personally don’t like using a “single return” style just to reduce how often I have to handle freeing of memory/resources. The main problem with that approach is that even if you adopt the convention, there will still be cases where it doesn’t work.

There are cases where you can’t or shouldn’t do all of the freeing of memory/resources in one spot; some paths need different cleanup than others. Trying to force all of that onto one function exit path is a bug waiting to happen.

My personal opinion for C and C++ (etc.) in this respect is that one should use automatic RAII-style memory management wherever possible and desirable, and otherwise just bite the bullet and accept that manual memory/resource management means being careful to free everything correctly, in the appropriate way, on each possible path. In the general case there’s no way around that.
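For comparison, here’s a hedged C++ sketch of the RAII approach (ReadInto and Process are hypothetical; std::unique_ptr is standard C++). The buffer’s destructor runs automatically on every return path, so the early returns stay leak-free with no manual cleanup calls:

#include <memory>

bool LoadAndProcess(const char* path) {
	auto buffer = std::make_unique<char[]>(4096);	// released automatically on every exit
	if (!ReadInto(buffer.get(), path)) {
		return false;	// no cleanup call needed here
	}
	return Process(buffer.get());
}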

Finally, I want to mention a few general principles of good code design that are related to the above discussion:

  1. Code that looks like a flock of birds trying to migrate east across the page of your source file (aka a “pyramid of doom”) can pretty much always be refactored into a vastly more readable and less structurally complex (and sometimes even less computationally expensive) form by the proper use of guard clauses and a strong “return early, return often” coding style. The “single return” style, in contrast, should almost never be used. It does the opposite of what its advocates claim: it almost always makes the paths harder to reason about, and it often increases redundancy (e.g. having to check the same conditions multiple times in multiple contexts).
  2. Mutable variables cast a cognitive shadow over all code that follows their declaration. For every new mutable variable you add to a piece of code, to ensure correctness you must constantly consider (consciously or subconsciously) whether it is changing on each subsequent line, or else accept more numerous and more serious bugs. The greater the reach of a mutable variable, the harder any block of code it touches becomes to reason about (see the small sketch after this list).
  3. If you want your code to be maximally easy to read and reason about, think in terms of aggressively establishing and enforcing invariants. The tighter the scope you can restrict mutable state and other dependencies to (within pragmatic limits), the higher quality your code will tend to be, and the easier it will be to work with; the rushed/sloppy way tends to make things take longer in the long run. For example, it’s usually best to handle error cases and other special cases as early as possible in the control flow and eliminate them cleanly, so that the cognitive shadow they cast on the rest of the code is minimized.
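As a small hedged sketch of the “cognitive shadow” idea from item 2 (C++-style; the function and variable names are made up): a const local can never change after its declaration, so later lines never need to be scanned for reassignments, whereas a mutable local must be watched on every subsequent line.

#include <vector>

int ComputeTotal(const std::vector<int>& prices) {
	const int taxRatePercent = 7;	// immutable: casts no shadow over the lines below
	int total = 0;	// mutable: every later line must be checked for changes to it
	for (const int price : prices) {
		total += price + (price * taxRatePercent) / 100;
	}
	return total;
}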

PS: I propose that “bird code” henceforth be another synonym for “pyramid of doom” code. I also like the term “cognitive shadow”. It seems like a fitting, evocative label for what happens when mutable variables and/or not-yet-handled errors or special cases loom over the remainder of the code, and it captures how they tend to degrade the quality and intelligibility of everything that follows until they are handled properly.