In neural networks, tensors encapsulate data of arbitrary dimensionality: scalars (0D tensors), vectors (1D tensors), matrices (2D tensors), and higher-dimensional arrays (3D, 4D, ..., nD tensors). They enable efficient representation and manipulation of complex datasets and model parameters. Deep learning libraries such as TensorFlow and PyTorch are built around tensor operations, which underpin computations like convolution, linear transformation, and backpropagation on large-scale data. Because each dimension of a tensor can signify a different aspect of the data, such as time, space, channels, or features, tensors can represent complex structures like images, videos, and text, making them indispensable for training and deploying neural network models.
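As an illustration of these dimensionality conventions, here is a minimal NumPy sketch; the same `ndim`/`shape` notions carry over directly to TensorFlow and PyTorch tensors. The shapes chosen (e.g. a batch of 28x28 RGB images) are hypothetical examples, not from the text above.

```python
import numpy as np

scalar = np.array(7.0)                       # 0D tensor: a single value
vector = np.array([1.0, 2.0, 3.0])           # 1D tensor: a vector of features
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])              # 2D tensor: a matrix (e.g. weights)
image_batch = np.zeros((32, 28, 28, 3))      # 4D tensor: batch, height, width, channels

# Each tensor reports its number of dimensions and its shape.
for name, t in [("scalar", scalar), ("vector", vector),
                ("matrix", matrix), ("image_batch", image_batch)]:
    print(f"{name}: ndim={t.ndim}, shape={t.shape}")
```

In this convention, the batch dimension lets one 4D tensor hold many images at once, which is what makes large-scale training computations efficient.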

Historical overview: The concept of a tensor in mathematics predates its application in computer science and neural networks, with its roots in the late 19th and early 20th centuries, particularly in differential geometry and physics for describing physical properties in multiple dimensions. The adoption of tensors in neural networks gained momentum with the rise of deep learning and the development of specialized libraries for tensor operations in the 2010s.

Key contributors: The mathematical theory of tensors was developed by mathematicians such as Gregorio Ricci-Curbastro and Tullio Levi-Civita in the context of tensor calculus. The application and popularization of tensors in neural networks and AI, however, are largely attributed to the developers and researchers behind deep learning libraries such as TensorFlow (developed by the Google Brain team) and PyTorch (developed by Facebook's AI Research lab). These libraries made tensor operations accessible and efficient, significantly contributing to rapid advances in AI and deep learning research and applications.