Why are Permutation Algorithms Important?
Generating permutations is a frequently occurring problem in both coding challenges and real-world problems. Any time you encounter the need to enumerate the arrangements of a set of items, you're considering permutations.
In addition to that, permutation generation algorithms are an important building block in more complicated algorithms. It's important to have a good understanding of the base algorithm in order to use it in more complicated situations. Enumerating all the permutations of a set of $n$ elements takes $O(n!)$ time, which quickly becomes impractical as $n$ grows. Therefore it's important to be able to reason about permutation generation algorithms in order to improve the time complexity of any practical algorithm.
Lastly, permutation generation algorithms are a great example of recursion and backtracking. They are a quintessential application of these techniques and demonstrate some of the tradeoffs of different approaches.
What's a Permutation?
A permutation is an arrangement of a set of items. For example, [1,2,3], [2,3,1], and [3,1,2] are all permutations of the set {1,2,3}.
A set of $n$ items has $n!$ distinct arrangements or permutations. A way to reason about this is to think about how many options there are at each position. To generate a permutation of length $n$, there are $n$ options that could be placed in the first slot, $n-1$ options in the second slot, $n-2$ in the third slot, and so on until the last slot which only has one option. Multiplying these choices together gives $n \times (n-1) \times (n-2) \times ... \times 1 = n!$ total options.
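As a quick sanity check of the slot-by-slot counting argument, we can multiply out the choices and compare against the factorial (a small illustrative sketch; `count_arrangements` is just a helper name for this example):

```python
import math

def count_arrangements(n: int) -> int:
    """Multiply the number of choices at each slot: n, then n-1, ..., then 1."""
    total = 1
    for choices in range(n, 0, -1):
        total *= choices
    return total

print(count_arrangements(5))  # 120
print(math.factorial(5))      # 120
```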
Time and Space Complexity
Generating a list of all $n!$ permutations of a set of $n$ items inherently requires $O(n \times n!)$ space to store all the permutations since there are $n!$ permutations each of length $n$. The time complexity is also $O(n \times n!)$. Therefore, it's probably not practical to enumerate all permutations in a single list. A more useful interface is to use an iterable that yields the next permutation each time it's called.
```python
from typing import Collection, Iterable, Tuple

def permutations(items: Collection) -> Iterable[Tuple]: ...
```
This still has a time complexity of $O(n \times n!)$ because it returns immutable tuples (copying each tuple takes $O(n)$ time for each of the $n!$ permutations). This could be improved to $O(n!)$ if we were happy to mutate a list internally and keep yielding the same list, but the loss of safety and the likelihood of misuse make it unlikely to be worthwhile. Either way, the space complexity drops to $O(n)$, which is the space required to store the current permutation and the call stack. We'll see shortly how a naive approach would require $O(n^2)$ space, and how to avoid it.
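To illustrate why yielding the same mutable list is easy to misuse, here's a toy generator (hypothetical, unrelated to the permutation code) that reuses one list across yields:

```python
def counting_prefixes(n: int):
    current = []  # one list object, reused for every yield
    for i in range(n):
        current.append(i)
        yield current

# Every yielded value is the SAME list, so collecting the results is surprising:
collected = list(counting_prefixes(3))
print(collected)  # [[0, 1, 2], [0, 1, 2], [0, 1, 2]] - not the prefixes we saw
```

A caller who materializes the generator into a list gets $n$ references to one fully-grown list, which is exactly the kind of bug a tuple-yielding interface avoids.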
The Python standard library already includes a number of combinatoric iterators in the `itertools` module, such as `combinations()`, `product()`, and of course `permutations()`.
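In production code, `itertools.permutations()` is usually the right tool; it yields tuples lazily, in lexicographic order of the input's indices:

```python
from itertools import permutations

for p in permutations([1, 2, 3]):
    print(p)
# (1, 2, 3)
# (1, 3, 2)
# (2, 1, 3)
# (2, 3, 1)
# (3, 1, 2)
# (3, 2, 1)
```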
A General Approach
We previously concluded that the number of permutations of a set of $n$ items is $n!$, based on having $n$ choices in the first position, $n-1$ choices in the second position, and so on. This lends itself to a recursive solution when generating the permutations.
$S$ is a set of $n$ items. Every permutation of $S$ begins with one of the values of $S$. Once we pick the first value, there are $n-1$ items left to pick from, and these $n-1$ remaining items form a sub-problem. We can then formulate a recurrence relation: for each starting value, prepend that value to every permutation of the remaining values. The base case occurs when there are no remaining values, in which case we should return a list containing a single empty permutation. That is, there is one permutation of the empty set, which is the empty arrangement ().
Here's a naive way to implement this recurrence:
```python
from typing import Collection, List, Tuple

def permutations(items: Collection) -> List[Tuple]:
    if not items:
        return [()]
    results = []
    for start_idx, start_val in enumerate(items):
        # Fix start_val in the first slot, then recurse on what's left
        remaining_items = items[:start_idx] + items[start_idx + 1:]
        for permutation_of_remaining in permutations(remaining_items):
            results.append((start_val, *permutation_of_remaining))
    return results
```
This top-down approach has a huge space complexity. If we have a set of $n$ items, the top-level call builds a list of up to $n!$ results, each of length $n$; the next call down the stack builds up to $(n-1)!$ results, each of length $n-1$, and so on. This gives a space complexity of $O((n! \times n) + ((n-1)! \times (n-1)) + ... + (1 \times 1))$, though the first term dominates, so the following terms fall away and it becomes $O(n! \times n)$. Even if the final output list is excluded (as is common practice), the space complexity would still be $O((n-1)! \times (n-1)) = O(n!)$.
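To see numerically that the largest term dominates this kind of sum, here's a quick check, counting stored elements under the simplifying assumption that level $k$ holds $k!$ results of length $k$:

```python
import math

def intermediate_space(n: int) -> int:
    # k! results of length k at each level, for k = 1 .. n
    return sum(math.factorial(k) * k for k in range(1, n + 1))

n = 8
print(intermediate_space(n))  # 362879
print(math.factorial(n) * n)  # 322560 - the top level alone is most of the total
```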
So how can we improve this?
We can improve the space complexity of the approach above by eliminating the internal `results` list in every sub-call. One way to do this is to keep a single list for our current permutation and update it using backtracking. When a leaf node is encountered (when `items` is empty), copy the current permutation into an overall results list.
```python
def permutations(all_items: Collection) -> List[Tuple]:
    results = []
    current_permutation = []

    def find_permutations(items: Collection) -> None:
        if not items:
            results.append(tuple(current_permutation))
        for i, item in enumerate(items):
            current_permutation.append(item)
            find_permutations(items[:i] + items[i + 1:])
            current_permutation.pop()

    find_permutations(all_items)
    return results
```
But the space complexity is still $O(n! \times n)$ because the `results` list contains every permutation. So let's return an iterable instead of a list. Here it's done using a recursive generator and `yield from`.
```python
def permutations(all_items: Collection) -> Iterable[Tuple]:
    current_permutation = []

    def find_permutations(items: Collection) -> Iterable[Tuple]:
        if not items:
            yield tuple(current_permutation)
        for i, item in enumerate(items):
            current_permutation.append(item)
            yield from find_permutations(items[:i] + items[i + 1:])
            current_permutation.pop()

    yield from find_permutations(all_items)
```
This reduces the space complexity to $O(n^2)$: the recursion is $n$ levels deep, and we make a copy of the remaining items at each recursive call to `find_permutations`.
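The payoff of the generator interface is that we can stop early without ever materializing all $n!$ results. A sketch (the generator is restated here so the snippet runs on its own, using `Sequence` since the code relies on slicing):

```python
from itertools import islice
from typing import Iterable, Sequence, Tuple

def permutations(all_items: Sequence) -> Iterable[Tuple]:
    current_permutation = []

    def find_permutations(items: Sequence) -> Iterable[Tuple]:
        if not items:
            yield tuple(current_permutation)
        for i, item in enumerate(items):
            current_permutation.append(item)
            yield from find_permutations(items[:i] + items[i + 1:])
            current_permutation.pop()

    yield from find_permutations(all_items)

# Only the first three of 10! = 3,628,800 permutations are ever built:
first_three = list(islice(permutations(tuple(range(10))), 3))
print(first_three[0])  # (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
```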
Again, how can we do better?
The 'Swapping' Method
Often when rearranging arrays, there's an approach that involves swapping elements instead of copying chunks of the array. In the code below, we iterate through the permutations by swapping one element with another, before recursing, then swapping it back. This is equivalent to how the previous method fixed the first value, then passed the remaining elements into the recursive call.
```python
def permutations(all_items: Collection) -> Iterable[Tuple]:
    items = [*all_items]  # make a copy so we don't mutate the original

    def find_permutations(start_idx: int) -> Iterable[Tuple]:
        if start_idx >= len(items) - 1:
            # Skip swapping the last item with itself and yield here
            yield tuple(items)
            return
        for i in range(start_idx, len(items)):
            items[start_idx], items[i] = items[i], items[start_idx]
            yield from find_permutations(start_idx + 1)
            items[start_idx], items[i] = items[i], items[start_idx]

    yield from find_permutations(0)
```
This approach has a space complexity of $O(n)$ for the `items` list and for the call stack. Its time complexity is the same as the previous approach: $O(n \times n!)$.
Bonus: Bit Manipulation
Another way to achieve $O(n)$ space complexity is to use a bitmask to represent which values have already been used. Note that a single 64-bit integer can only track sets of up to $n = 64$ items; Python's arbitrary-precision integers remove that limit, at the cost of the mask itself growing to $O(n)$ bits.
```python
def permutations(all_items: Collection) -> Iterable[Tuple]:
    current_permutation = []
    complete_mask = (1 << len(all_items)) - 1

    def find_permutations(used_mask: int) -> Iterable[Tuple]:
        if used_mask == complete_mask:
            yield tuple(current_permutation)
        for i, item in enumerate(all_items):
            if used_mask & (1 << i):
                continue  # value at index i is already in the current permutation
            current_permutation.append(item)
            yield from find_permutations(used_mask | (1 << i))
            current_permutation.pop()

    yield from find_permutations(0)
```
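As a final check, the bitmask version can be compared against `itertools.permutations`, which happens to yield in the same index order (the function is restated here so the check runs on its own, with `Sequence` since the mask indexes by position):

```python
import itertools
from typing import Iterable, Sequence, Tuple

def permutations(all_items: Sequence) -> Iterable[Tuple]:
    current_permutation = []
    complete_mask = (1 << len(all_items)) - 1

    def find_permutations(used_mask: int) -> Iterable[Tuple]:
        if used_mask == complete_mask:
            yield tuple(current_permutation)
        for i, item in enumerate(all_items):
            if used_mask & (1 << i):
                continue  # value at index i is already used
            current_permutation.append(item)
            yield from find_permutations(used_mask | (1 << i))
            current_permutation.pop()

    yield from find_permutations(0)

data = [1, 2, 3, 4]
assert list(permutations(data)) == list(itertools.permutations(data))
print("matches itertools for", len(data), "items")
```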