Richard Socher (8 articles)
Word Vector
Numerical representations of words that capture their meanings, relationships, and context within a language.
Generality: 690
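A minimal sketch of the core idea, using toy 3-dimensional vectors rather than real trained embeddings (real word vectors typically have hundreds of dimensions, and the values below are invented for illustration): semantically related words get vectors that point in similar directions, which cosine similarity measures.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy vectors: related words cluster, unrelated ones do not.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
apple = [0.1, 0.2, 0.95]

print(cosine_similarity(king, queen))  # high: related words
print(cosine_similarity(king, apple))  # low: unrelated words
```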
Similarity Learning
A technique in AI focusing on training models to measure task-related similarity between data points.
Generality: 675
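One common training objective in similarity learning is the triplet loss, sketched below in plain Python (a simplified, framework-free version; real implementations operate on batches of learned embeddings): it pushes an anchor point closer to a related "positive" example than to an unrelated "negative" one, by at least a margin.

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: zero once the positive is closer to the anchor
    than the negative by at least `margin` (squared Euclidean distance)."""
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# Satisfied triplet: positive is near, negative is far, so the loss is zero.
print(triplet_loss([0.0, 0.0], [0.0, 0.1], [5.0, 5.0]))  # 0.0
```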
Embedding Space
A mathematical space into which high-dimensional data points, such as text, images, or other complex data types, are mapped as lower-dimensional vectors that capture their essential properties.
Generality: 700
Embedding
Representations of items, like words, sentences, or objects, in a continuous vector space, facilitating their quantitative comparison and manipulation by AI models.
Generality: 865
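Because embeddings live in a continuous vector space, quantitative comparison reduces to distance computations. A minimal sketch with hypothetical 2-dimensional embeddings (invented values, purely illustrative): finding the item most similar to a query is a nearest-neighbor lookup.

```python
def nearest(query, items):
    """Return the name of the item whose embedding is closest
    (Euclidean distance) to the query vector."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return min(items, key=lambda name: dist(items[name], query))

# Hypothetical embeddings: animals cluster together, vehicles elsewhere.
embeddings = {"cat": [0.9, 0.1], "dog": [0.8, 0.2], "car": [0.1, 0.9]}
print(nearest([0.88, 0.12], embeddings))  # "cat"
```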
Multimodal
AI systems or models that can process and understand information from multiple modalities, such as text, images, and sound.
Generality: 837
SSL (Self-Supervised Learning)
Type of ML where the system learns to predict part of its input from other parts, using the structure of its own data as supervision.
Generality: 815
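The supervision signal comes from the data itself. A minimal sketch of one common scheme, masked-token prediction (simplified: real systems mask a random subset of positions and train a large model on the resulting pairs): each training example hides one token and asks the model to recover it from the surrounding context, so no human labels are needed.

```python
def make_masked_examples(tokens):
    """Build self-supervised (context, target) pairs: each position in turn
    is replaced with a mask token, and the hidden token becomes the label."""
    examples = []
    for i in range(len(tokens)):
        context = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        examples.append((context, tokens[i]))
    return examples

pairs = make_masked_examples(["the", "cat", "sat"])
print(pairs[0])  # (['[MASK]', 'cat', 'sat'], 'the')
```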
Base Model
Pre-trained AI model that serves as a starting point for further training or adaptation on specific tasks or datasets.
Generality: 790
Self-Supervised Pretraining
ML approach in which a model learns to predict parts of the input data from other parts without requiring labeled data; the pretrained model is then fine-tuned on downstream tasks.
Generality: 725