In machine learning and related fields, a function approximator is a tool or algorithm that estimates the mapping from input data to output values. These approximators are essential when the underlying function is too complex to model explicitly or when the function is unknown but can be inferred from data. Common examples include neural networks, polynomial regression, and spline interpolation. They are widely used in reinforcement learning to estimate value functions or policies, in supervised learning for predictive modeling, and in control systems for system identification and state estimation. The choice of function approximator depends on factors such as the complexity of the function, the amount of available data, and the desired accuracy.
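As a minimal sketch of the idea, the snippet below fits a low-degree polynomial to noisy samples of an "unknown" target function (a sine curve is assumed here purely for illustration); the approximator sees only input-output pairs, not the function itself:

```python
import numpy as np

# Noisy samples from an "unknown" target function (sin is an assumption
# for illustration): the approximator only ever sees the (x, y) pairs.
rng = np.random.default_rng(0)
x = np.linspace(0, np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.01, size=x.size)

# Use a degree-5 polynomial as the function approximator.
coeffs = np.polyfit(x, y, deg=5)
approx = np.poly1d(coeffs)

# Measure how closely the fitted polynomial tracks the true function
# on the sampled interval.
max_err = np.max(np.abs(approx(x) - np.sin(x)))
```

The polynomial degree trades off flexibility against overfitting, mirroring the trade-offs mentioned above: a richer approximator (higher degree, or a neural network) can capture more complex functions but demands more data to fit reliably.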

Historical overview: The concept of function approximation dates back to early mathematical studies, but its formal use in computational contexts began to rise in the mid-20th century. It gained significant traction in the 1980s and 1990s with the advent of neural networks and machine learning algorithms that relied heavily on approximation techniques.

Key contributors: Notable figures in the development and application of function approximators include John von Neumann and Norbert Wiener, who laid the groundwork for cybernetics and neural networks, and Richard Bellman, whose work on dynamic programming heavily influenced the use of function approximation in reinforcement learning. Later, researchers such as Geoffrey Hinton and Yann LeCun advanced the field through the development of deep learning techniques, significantly enhancing the capabilities of function approximators.