Dynamic programming (DP) is an algorithmic technique for optimizing the evaluation of recursive functions. It rests on the principle of dividing a problem into overlapping subproblems, solving each subproblem only once, and storing the solutions in a table for future reference. The approach pays off precisely when many subproblems recur during the computation, as in calculating the Fibonacci sequence, optimizing sequential decision processes, or aligning sequences in bioinformatics. Its key advantage is that it can turn an exponential-time recursive algorithm into a polynomial-time one by trading computation for memory.
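The Fibonacci example mentioned above can illustrate the trade-off concretely. The sketch below (in Python, with hypothetical function names) contrasts a naive recursion, which recomputes the same subproblems exponentially often, with a memoized version that caches each result:

```python
from functools import lru_cache

def fib_naive(n):
    # Exponential time: fib_naive(n - 2) is recomputed inside
    # both branches, so the call tree doubles at every level.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Linear time: each subproblem is solved once, then looked
    # up in the cache on every later call -- computation is
    # traded for the memory that holds the table of results.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, far beyond what fib_naive reaches quickly
```

The same effect can be achieved bottom-up ("tabulation") by filling an array from `fib(0)` upward, which avoids recursion entirely; memoization is simply the top-down form of the same idea.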

Historical overview: The term "dynamic programming" was coined by Richard Bellman in the 1950s to describe the process of solving problems that require finding the best decisions one after another. By Bellman's own account, he chose a deliberately vague name to shield his mathematical research from political scrutiny within the Department of Defense. The concept gained popularity quickly, proving to be a powerful tool for a wide range of optimization problems and for the development of efficient algorithms.

Key contributors: Richard Bellman is the principal figure associated with the development of dynamic programming. His foundational work enabled its application across fields including operations research, economics, and computer science, and his efforts in establishing both the theory and practice of dynamic programming made it a critical component of many theoretical and applied disciplines.