Filter by Metadata

Clusters
cluster_0 (412)
cluster_1 (587)
cluster_2 (298)
cluster_3 (156)
cluster_4 (203)
cluster_5 (341)
cluster_6 (189)

Categories
Technology
Science
Mathematics
Literature
History
Business
Medicine

Vector Detail

doc_1847 (cluster_1)

"Transformer architectures have revolutionized natural language processing by introducing self-attention mechanisms that capture long-range dependencies..."

Source: arxiv:2401.0847
Category: Technology
Dimensions: 3072
Norm: 1.0000
Created: 2026-03-08

Vector (first 8 dims)

[
  -0.0234, 0.1847, -0.0912,
  0.3201, -0.1456, 0.0789,
  -0.2103, 0.0567, ...
]
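The Norm field above reads 1.0000, which is consistent with unit-normalized (L2-normalized) embeddings. A minimal sketch of how such normalization works; the input values below echo the displayed 8-dim prefix but are otherwise hypothetical, not the actual doc_1847 vector:

```python
import math

def l2_normalize(vec):
    """Scale a vector so its Euclidean (L2) norm is 1.0."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm == 0.0:
        raise ValueError("cannot normalize the zero vector")
    return [x / norm for x in vec]

# Hypothetical 8-dim vector, similar to the prefix shown above
v = [-0.0234, 0.1847, -0.0912, 0.3201, -0.1456, 0.0789, -0.2103, 0.0567]
u = l2_normalize(v)
print(round(math.sqrt(sum(x * x for x in u)), 4))  # → 1.0
```

Storing unit vectors is a common design choice because cosine similarity then reduces to a plain dot product.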

Nearest Neighbors

doc_2103  0.9847  "Self-attention allows the model to weigh..."
doc_0912  0.9721  "BERT and GPT models leverage transformer..."
doc_3456  0.9634  "Multi-head attention computes multiple..."
doc_7891  0.9518  "Positional encoding adds sequence info..."
doc_5234  0.9401  "Layer normalization stabilizes training..."
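The neighbor scores above are consistent with cosine similarity. A toy sketch of ranking nearest neighbors this way; the document ids are borrowed from the list above, but the 4-dim vectors are made up stand-ins for real 3072-dim embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, docs, k=3):
    """Return the k doc ids most similar to the query, best first."""
    scored = [(doc_id, cosine(query, vec)) for doc_id, vec in docs.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:k]

# Made-up example vectors (real systems would index millions of these
# with an approximate-nearest-neighbor structure, not a linear scan)
docs = {
    "doc_2103": [0.9, 0.1, 0.0, 0.1],
    "doc_0912": [0.8, 0.2, 0.1, 0.0],
    "doc_3456": [0.1, 0.9, 0.1, 0.0],
}
query = [1.0, 0.0, 0.0, 0.0]
for doc_id, score in nearest(query, docs, k=3):
    print(doc_id, round(score, 4))  # doc_2103 ranks first (highest similarity)
```

A linear scan like this is O(n·d) per query; at the 24,891 vectors reported in the status bar it is still feasible, but larger collections typically use an approximate index.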

Last Query: "transformer architecture" | Vectors: 24,891 | Storage: 1.2 GB | Projection: 340ms | All systems operational