Etude on Recursion Elimination

Transformation-based program verification was a very important topic in the early years of the theory of programming. Great computer scientists contributed to these studies: John McCarthy, Amir Pnueli, Donald Knuth... Many fascinating examples were examined and resulted in recursion elimination techniques known as tail recursion and co-recursion. In this paper, we examine just a single example (but a new one, we hope) of recursion elimination via program manipulations and problem analysis. The recursion pattern of the example matches descending dynamic programming but is neither a tail-recursion nor a co-recursion pattern. The example may also be considered from different perspectives: as a transformation of descending dynamic programming into ascending dynamic programming (with fixed-size static memory), as a proof of the functional equivalence of recursive and iterative programs (that can later serve as a case study for automatic theorem proving), or just as a fascinating algorithmic puzzle for fun and exercise in algorithm design, analysis, and verification. The article is published in the author's wording.


McCarthy 91 function
We would like to start with a short story about the McCarthy 91 function that follows (in principle) the corresponding Wikipedia article [21].
The function was introduced in papers published by Zohar Manna, Amir Pnueli and John McCarthy in 1970 [16,15]. These papers represented early developments towards the application of formal methods to program verification. The function has a "complex" recursion pattern (contrasted with simple patterns, such as recurrence, tail-recursion or co-recursion).
The function M is defined by M(n) = if n > 100 then n − 10 else M(M(n + 11)), and it admits a tail-recursive companion M_aux(n, c) = if c = 0 then n else if n > 100 then M_aux(n − 10, c − 1) else M_aux(n + 11, c + 1). Nevertheless, M_aux(n, m) = M^m(n) for all m, n ∈ N (assuming that M^0 = (λn ∈ N. n)). Since the definition of M_aux matches the tail-recursion pattern, the McCarthy 91 function can be computed by an iterative algorithm/program (and even by a very efficient iteration-free algorithm (1): M(n) = 91 for all n ≤ 100, and M(n) = n − 10 otherwise). A formal derivation of an iterative version from the recursive one, based on the use of continuations, was given in [20] in 1980.
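Following the Wikipedia treatment that this section retells, the three versions can be sketched in Python (the names M, M_aux and M_iter are ours, not the paper's):

```python
def M(n: int) -> int:
    """McCarthy 91: the original nested-recursive definition."""
    return n - 10 if n > 100 else M(M(n + 11))

def M_aux(n: int, c: int) -> int:
    """Tail-recursive version: c counts pending applications of M."""
    if c == 0:
        return n
    if n > 100:
        return M_aux(n - 10, c - 1)
    return M_aux(n + 11, c + 1)

def M_iter(n: int) -> int:
    """Iteration-free closed form (1): 91 for n <= 100, n - 10 otherwise."""
    return 91 if n <= 100 else n - 10
```

For small inputs one can check mechanically that all three versions agree, e.g. M(n) == M_aux(n, 1) == M_iter(n) for n in 0..199.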
As the field of Formal Methods advanced, this example appeared repeatedly in the research literature. In particular, it has been viewed as a "challenge problem" for automated program verification. Donald Knuth generalized the function to include additional parameters [11]; formal proofs (using the ACL2 theorem prover) that Knuth's generalized function is total can be found in [4,5].

Hull Strength Puzzle
We started with a short story about the McCarthy function because we would like to justify our interest in translating other examples of functional/recursive programs into iterative algorithms/programs in general, and in the following problem in particular, which we call in the sequel the Hull Strength Puzzle (HSP).
Let us characterize the mechanical stability (strength) of a hull of a mobile phone by an integer h that is equal to the height (in meters) safe for the hull to fall from, while height (h + 1) meters is unsafe (i.e. the hull breaks). You have to determine the stability of hulls of a particular kind by dropping them from different levels of a tower of H meters. (One may assume that mechanical stability does not change after a safe fall.) How many times do you need to drop hulls, if you have 2 hulls in the stock? What is the optimal number (of droppings) in this case?

This problem formulation is just a literary version of the formulation of the Dropping Bricks Problem used in [18,19]; another variant of the formulation, the Egg dropping puzzle, can be found in the Wikipedia article on Dynamic Programming at https://en.wikipedia.org/wiki/Dynamic_programming#Egg_dropping_puzzle (accessed September 26, 2018).
Basically, the question to answer is how to compute the optimal number of droppings G_H, if the height of the tower is H and you have 2 hulls in the stock.
Our purpose is to prove that the problem is solved by the following simple formula

G(H) = arg min { n : n × (n + 1)/2 ≥ H },    (2)

that can be implemented as a trivial non-recursive function (i.e. with an iterative body) G_iter(H : N):
1. var n : N;
2. n := 0;
3. while n × (n + 1)/2 < H do n := n + 1;
4. return n.

With the purpose of deriving the above formula (2), let us start with a recursive solution for HSP. This problem is an example of an optimization problem. Any optimal method to define the mechanical stability should start with some step (command) that prescribes to drop the first phone from some particular (but optimal) level h. Hence the following equality holds for this particular level h:

G_H = 1 + max( (h − 1), G_(H−h) ),

where 'max' corresponds to the worst of the two cases: if the phone breaks, the (h − 1) levels below have to be examined one by one with the second phone; if it survives, the problem reduces to the tower of the remaining height (H − h).
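The iterative body can be transcribed directly, e.g. in Python (a sketch; the name G_iter follows the paper):

```python
def G_iter(H: int) -> int:
    # Smallest n such that n * (n + 1) / 2 >= H, computed by the
    # simple while-loop of the non-recursive body above.
    n = 0
    while n * (n + 1) // 2 < H:
        n += 1
    return n
```

For instance, G_iter(10) = 4: four droppings suffice for a tower of 10 meters.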
Since the particular value h is optimal, and optimality means minimality, the above equality transforms to the following one:

G_H = min over 1 ≤ h ≤ H of ( 1 + max( (h − 1), G_(H−h) ) ).

Besides, we can add one obvious equality G_0 = 0.
Remark that the sequence of integers G_0, G_1, ... G_H, ... that meets these two equalities is unique, since G_0 is defined explicitly, G_1 is defined by G_0, G_2 is defined by G_0 and G_1, and so on. Hence it is possible to move from the sequence G_0, G_1, ... G_H, ... to a function G : N → N that maps every natural x to G_x and satisfies the following functional equation for the objective function G:

G(x) = if x = 0 then 0 else min over 1 ≤ h ≤ x of ( 1 + max( (h − 1), G(x − h) ) ).    (3)

This equation has a unique solution, as follows from the uniqueness of the sequence G_0, G_1, ... G_H, ... Let us summarize the above discussion as the following proposition.

Proposition 1. The functional equation (3) has a unique solution in N^N, namely the function that maps every H ∈ N to the optimal number of droppings G_H.
Moreover, we can go further: equation (3) can be adopted as a recursive definition of a function, i.e. a recursive algorithm presented in functional pseudo-code.
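For example, in Python (a sketch, ours; the memoization decorator is added only to keep repeated subproblems cheap, in the spirit of descending dynamic programming):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def G(x: int) -> int:
    # Equation (3): the first dropping from level h either breaks the
    # phone (h - 1 more droppings are needed in the worst case) or not
    # (the problem reduces to a tower of height x - h).
    if x == 0:
        return 0
    return min(1 + max(h - 1, G(x - h)) for h in range(1, x + 1))
```

A few first values are G(0), G(1), ... = 0, 1, 2, 2, 3, 3, 3, ...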

A Special Case of Dynamic Programming
Dynamic Programming was introduced by Richard Bellman in the 1950s [2] to tackle optimal planning problems. At that time, the noun programming had nothing in common with the more recent computer programming and meant planning (compare: linear programming). The adjective dynamic points out that Dynamic Programming is related to a change of state (compare: dynamic logic, dynamic system). The Bellman equation is a recursive functional equality for the objective function that expresses the optimal solution at the "current" state in terms of optimal solutions at the next (changed) states. It formalizes the so-called Bellman Principle of Optimality: an optimal program (or plan) remains optimal at every stage.
After analysis of Bellman equations for particular problems [6], several versions of a (recursive template for/of) (descending) dynamic programming were suggested and examined. In the present paper we use the most recent and general one [19]:

G(x) = if p(x) then f(x) else g( x, h_1(x, G(t_1(x))), ... h_(n(x))(x, G(t_(n(x))(x))) ).    (4)

We consider the template as a recursive program scheme [9,12,17], i.e. a recursive control flow structure with uninterpreted symbols:
• G is the main functional symbol representing (after interpretation of the basic functional and predicate symbols) the objective function G : X → Y for some X and Y;
• p is a basic predicate symbol representing (after interpretation) some known predicate p ⊆ X;
• f is a basic functional symbol representing (after interpretation) some known function f : X → Y;
• g is a basic functional symbol representing (after interpretation) some known function g : X × Z* → Y for some appropriate Z (with a variable arity n(x) : X → N);
• all h_i and t_i (i ∈ [1..n(x)]) are basic functional symbols representing (after interpretation) some known functions h_i : X × Y → Z and t_i : X → X.
In the sequel we do not make an explicit distinction in notation between symbols and interpreted symbols, but make a verbal distinction only, by saying, for example, symbol g and function g.

Equation (3) for the Hull Strength Puzzle is a particular example of a functional equation that matches the recursive template for descending dynamic programming (4). In this case we have:
• the predicate λx.(x = 0) is the interpretation for p,
• the constant function λx.0 is the interpretation for f,
• the identity function λx.x is the interpretation for the arity n,
• the functions λx.(x − h) are the interpretations for t_h,
• the functions λ(x, y).(1 + max(h − 1, y)) are the interpretations for h_h,
• the minimum function is the interpretation for g.

A natural question arises: maybe there exists a standard scheme [9,12,17] (i.e. a flowchart with uninterpreted predicate and functional symbols instead of predicates and functions) that is functionally equivalent to the recursive scheme (4)? Unfortunately, in the general case the answer is negative, according to the following proposition proved by M.S. Paterson and C.T. Hewitt [12,17].
Proposition 2. The following special case of the recursive template of descending dynamic programming,

F(x) = if p(x) then f(x) else g( F(h(x)), F(h(h(x))) ),

is not equivalent to any standard program scheme (with fixed-size static memory).
This proposition does not mean that (potentially) unbounded memory (e.g. a system stack or dynamic heap) is always required; it just says that for some interpretations of the uninterpreted symbols p, f, g and h the size of the required memory depends on the input data. But once p, f, g and h are interpreted, it may happen that the function F can be computed by an iterative program without unbounded memory. For example, the Fibonacci numbers Fib(n) = if (n = 0 or n = 1) then 1 else Fib(n − 2) + Fib(n − 1) match the pattern of the scheme in proposition 2 above, but just three integer variables suffice to compute them by an iterative program.
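To make the Fibonacci remark concrete, here is a sketch (ours) of such an iterative program; together with the loop counter it uses exactly three integer variables:

```python
def fib_iter(n: int) -> int:
    # Fib(0) = Fib(1) = 1, as in the paper's definition;
    # slide a two-value window n times.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

So fib_iter(0), fib_iter(1), ... yields 1, 1, 2, 3, 5, 8, ...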
Thus proposition 2 rules out the opportunity to get an iterative solution for the Hull Strength Puzzle by specialization [8,10] of a standard program scheme equivalent to the recursive scheme (4). But this proposition does not prohibit the existence of an iterative algorithm for HSP that uses interpreted functions and predicates.
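Although no equivalent uninterpreted flowchart exists, the recursive template (4) itself is easy to express as a higher-order function. The following sketch (ours, including our reading of the HSP interpretation of the basic symbols, which is an assumption) runs the HSP instance directly:

```python
# The descending dynamic programming template as a higher-order
# function; the symbols p, f, g, h_i, t_i and the arity n are passed
# in as ordinary Python functions (no memoization: this is the plain
# recursive scheme, so it is exponential and only for small x).
def template(p, f, g, h, t, n):
    def G(x):
        if p(x):
            return f(x)
        return g(x, [h(i, x, G(t(i, x))) for i in range(1, n(x) + 1)])
    return G

# Assumed HSP interpretation: p = (x = 0), f = const 0, n = identity,
# t_h(x) = x - h, h_h(x, y) = 1 + max(h - 1, y), g = minimum.
G_hsp = template(
    p=lambda x: x == 0,
    f=lambda x: 0,
    g=lambda x, ys: min(ys),
    h=lambda i, x, y: 1 + max(i - 1, y),
    t=lambda i, x: x - i,
    n=lambda x: x,
)
```

For instance, G_hsp(10) = 4, matching the recursive definition (3).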

Toward an Iterative Algorithm
Let us present some (not very formal) derivation of the formula (2) for the Hull Strength Puzzle, and start with a look at Fig. 1, which depicts an initial part of the graph of G computed according to (3). One can observe that G is monotone (non-decreasing) and has the jump property: each increment of the argument increases the value of G by at most 1.

Let y be an optimal level for the first dropping, so that

G(x) = 1 + max( (y − 1), G(x − y) ).    (5)

Due to the monotonicity and jump properties we have either G(x − y) = y or G(x − y) = (y − 1); let us accept the latter option and rule out the former (but we can not prove why we may do it):

G(x − y) = (y − 1), and hence G(x) = y.    (6)

Now, for technical convenience, let a be (x − y) and b be (y − 1); then x = (a + b + 1), and (5) and (6) lead to the equality

G(a + G(a) + 1) = G(a) + 1.    (7)

Together with another equality, G(0) = 0, it leads to the following equality (that can be proved by induction):

G(n × (n + 1)/2) = n for every n ∈ N, and hence, by monotonicity, G(H) = arg min { n : n × (n + 1)/2 ≥ H }.    (8)
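The observations used in this derivation can at least be checked mechanically for an initial segment of N; a quick sketch (ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def G(x: int) -> int:
    # The recursive definition (3) of the objective function.
    return 0 if x == 0 else min(1 + max(h - 1, G(x - h))
                                for h in range(1, x + 1))

vals = [G(x) for x in range(120)]
# Monotonicity and the jump property: steps of 0 or 1 only.
assert all(0 <= vals[x + 1] - vals[x] <= 1 for x in range(119))
# G(a + G(a) + 1) = G(a) + 1 for every a.
assert all(G(a + G(a) + 1) == G(a) + 1 for a in range(100))
# G(n(n+1)/2) = n.
assert all(G(n * (n + 1) // 2) == n for n in range(15))
```

The checks pass, which of course is evidence rather than a proof.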

An Optimal Procedure for Mechanical Strength
Formula (8) suggests the following optimal procedure Strength(H) (where H ∈ N) for defining the mechanical strength of a hull.

To explain the idea of the procedure, let us assume that the height of the tower H is exactly the sum of an arithmetic progression n, (n − 1), ... 2, 1. Then the procedure divides the tower into n layers of heights step_1 = n, step_2 = (n − 1), ... step_(n−1) = 2 and step_n = 1. (For example, in the left part of Fig. 2 one can see a tower of height 10 divided into 4 layers of heights 4, 3, 2 and 1.)

The first loop in the procedure prescribes to drop the first phone in a sequence (while it is safe) from the (tops of the) layers at levels n, n + (n − 1), n + (n − 1) + (n − 2), ... until it breaks after dropping from the top of some layer k ≥ 1, i.e. from the level n + (n − 1) + (n − 2) + ... + (n − (k − 1)). (In the exercise of the procedure in the right part of Fig. 2, k = 2.)

The second loop in the procedure prescribes to use the second phone, moving one by one (while the phone is safe) through all levels from ( n + (n − 1) + ... + (n − (k − 2)) ) + 1 to ( n + (n − 1) + ... + (n − (k − 1)) ) − 1 of the layer k (from the top of which the first phone fell down and broke). (In the exercise of the procedure in the right part of Fig. 2, two levels 5 and 6 were examined.) The mechanical strength of the hull is the last level from which the second phone was safely dropped. (In the exercise of the procedure in the right part of Fig. 2 it is level 5.)

In total, the procedure makes at most n droppings. For optimality, suppose (for the sake of contradiction) that some method defines the strength with at most m < n droppings in the worst case, and let m_1 < m_2 < ... < m_last be the levels from which the first phone is dropped (while it is safe). Remark that last ≤ m, because the first phone can survive all m droppings. Then we have:
• m_1 ≤ m, because the first phone can break after the first dropping, and then the (m_1 − 1) levels below have to be examined one by one with the second phone;
• (m_2 − m_1) ≤ m − 1, ... (m_last − m_(last−1)) ≤ m − (last − 1), for the same reason.
Summing up, H ≤ m + (m − 1) + ... + (m − (last − 1)) ≤ m × (m + 1)/2; by the definition of n in (2) it implies that m ≥ n, i.e. m = n. Contradiction with the assumption m < n. Thus we have proved the following proposition.
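The procedure can be simulated against a hull of known strength. A sketch (ours), assuming H = n(n+1)/2 as in the explanation above; the helper safe, the counter drops and the parameter true_h are our own names, introduced only to model and count droppings:

```python
def strength(H, true_h):
    """Simulate the two-loop procedure; return (measured strength, droppings)."""
    n = 0
    while n * (n + 1) // 2 < H:      # number of layers, as in G_iter
        n += 1
    drops = 0

    def safe(level):                 # model one dropping from `level`
        nonlocal drops
        drops += 1
        return level <= true_h

    # First loop: drop the first phone from the tops of the layers.
    level, step = 0, n
    while step >= 1 and safe(level + step):
        level += step
        step -= 1
    if step < 1:
        return level, drops          # survived the whole tower
    # Second loop: scan the broken layer level by level, second phone.
    last = level
    for l in range(level + 1, level + step):
        if not safe(l):
            break
        last = l
    return last, drops

# Every strength 0..H is determined with at most n droppings:
H, n = 10, 4
assert all(strength(H, h) == (h, strength(H, h)[1])
           and strength(H, h)[1] <= n for h in range(H + 1))
```

For example, strength(10, 5) returns (5, 4): the first phone is dropped from levels 4 and 7 (breaking at 7, so k = 2), the second from levels 5 and 6, exactly as in the right part of Fig. 2.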
Proposition 3. Procedure Strength implements an optimal (in the sense of the number of droppings) method to define the mechanical strength of hulls using 2 hulls: for any given H ∈ N it defines the mechanical strength dropping hulls at most arg min { n : n × (n + 1)/2 ≥ H } times (and this upper bound is exact).
According to proposition 1, the functional equation (3) has a unique solution in N^N that computes the optimal number of droppings sufficient to define the strength. Due to this uniqueness and according to proposition 3, this solution is defined by equality (2), and G = G_iter.

Conclusion: Towards Formal Verification
Let us start with a summary of the contributions of this paper.
• The paper discusses the so-called Hull Strength Puzzle (see subsection 1.2) and how to eliminate recursion and build an iterative algorithm to solve the problem.
• The problem under study is an instance of a so-called learning problem: to determine a function in some family that has certain properties by testing (querying) the function several times.
• The recursive solution of the problem is a particular instance of dynamic programming and matches the descending dynamic programming template (see subsection 1.3).
• Unfortunately, the descending dynamic programming template is not equivalent to any fixed standard program scheme (see subsection 1.3), and hence an iterative solution for the problem cannot result from a general one by program specialization [8,10].
• Also, the recursive solution matches neither the tail-recursion nor the recurrent pattern that can be converted into iterative algorithms by well-known techniques [11].
• We derived a candidate iterative solution for the Hull Strength Puzzle by some program manipulations (basically, loop unfolding) and a (not very sound) semantic analysis of the unfolded loop (see subsection 2.1).
• Finally, we gave (see subsection 2.2) a roundabout (and very much human-oriented) proof of correctness of the iterative algorithm for the Hull Strength Puzzle (using an optimal method to define the mechanical strength of the hulls).
Some topics for further studies are presented below (from the nearest to those which require more time).
• To prove, using a proof assistant (ACL2 most probably), that the iterative and recursive definitions of the function G (see subsection 1.2) are equivalent.
• To investigate how to generalize the pattern of the recursive function and the very particular manipulations used/presented in this paper for recursion elimination in more general cases.
• To investigate methods to find recursive patterns admitting recursion elimination. Maybe machine learning can help to advance in this direction.
• To design and implement a plugin for some IDE (Integrated Development Environment) that analyses program code to find recursive patterns admitting recursion elimination and eliminates these cases of recursion at the object-code level.