
Module model


Neural network weights and forward/backward pass for Skip-gram and CBOW with Negative Sampling.

Weight Matrices

  • input_weights (W_in): shape [vocab_size × embedding_dim] — the “input” or center-word embedding matrix.
  • output_weights (W_out): shape [vocab_size × embedding_dim] — the context/output embedding matrix used in the dot-product scoring.
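The two matrices above can be sketched as a plain struct. This is a hypothetical illustration, not the crate's actual API: the field names follow the doc text, but the constructor and the flattened `Vec<f32>` storage are assumptions.

```rust
// Hypothetical sketch of the Model struct described above; the flattened
// row-major Vec<f32> layout and the constructor are assumptions.
struct Model {
    vocab_size: usize,
    embedding_dim: usize,
    // W_in: [vocab_size x embedding_dim] center-word embeddings
    input_weights: Vec<f32>,
    // W_out: [vocab_size x embedding_dim] context/output embeddings
    output_weights: Vec<f32>,
}

impl Model {
    fn new(vocab_size: usize, embedding_dim: usize) -> Self {
        Model {
            vocab_size,
            embedding_dim,
            // Real implementations initialise W_in with small random values;
            // zeros keep this sketch dependency-free.
            input_weights: vec![0.0; vocab_size * embedding_dim],
            output_weights: vec![0.0; vocab_size * embedding_dim],
        }
    }

    // Slice view of the W_in row for word index `w`.
    fn input_embedding(&self, w: usize) -> &[f32] {
        let d = self.embedding_dim;
        &self.input_weights[w * d..(w + 1) * d]
    }
}
```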

Negative Sampling Loss

For a positive pair (center c, context o) and k negative samples n_1, …, n_k, the per-pair objective to maximise is:

L = log σ(v_c · v_o) + Σ_{i=1}^{k} log σ(−v_c · v_{n_i})

where v_c is the W_in embedding of c, and v_o, v_{n_i} are W_out embeddings.

Gradients are applied in-place via SGD.
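One in-place SGD step for this objective can be sketched as follows. This is an assumed implementation, not the crate's actual code; the function name and signature are hypothetical. It uses the identity that for both the positive term (label 1) and negative terms (label 0), the gradient of the loss with respect to the score s = v_out · v_c collapses to (label − σ(s)).

```rust
// Logistic function used by the loss above.
fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

// Hypothetical sketch of one SGD ascent step on the objective L.
// `v_c`: center embedding (row of W_in); `outputs`: the context vector
// followed by the k negative vectors (rows of W_out); `labels`: 1.0 for
// the positive pair, 0.0 for each negative.
fn sgd_step(v_c: &mut [f32], outputs: &mut [Vec<f32>], labels: &[f32], lr: f32) {
    let dim = v_c.len();
    let mut grad_c = vec![0.0f32; dim];
    for (v_out, &label) in outputs.iter_mut().zip(labels) {
        let score: f32 = v_out.iter().zip(v_c.iter()).map(|(a, b)| a * b).sum();
        // (label - sigma(score)) covers both the sigma(s) and sigma(-s) terms.
        let g = lr * (label - sigmoid(score));
        for i in 0..dim {
            grad_c[i] += g * v_out[i]; // accumulate using the OLD v_out value
            v_out[i] += g * v_c[i];    // then update W_out in place
        }
    }
    // Update the center embedding last, after all output rows are scored.
    for i in 0..dim {
        v_c[i] += grad_c[i];
    }
}
```

Note the order: the gradient for v_c is accumulated against the pre-update output vectors, matching the usual word2vec update.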

Structs

Model
Core weight matrices for Word2Vec.

Functions

sentence_to_pairs
Generate (center, context) training pairs from a tokenised sentence.
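A function like `sentence_to_pairs` might look as follows. The real signature is not shown on this page, so the word-index representation and fixed window parameter are assumptions for illustration.

```rust
// Hypothetical sketch: emit every (center, context) pair within a fixed
// symmetric window over a sentence of word indices.
fn sentence_to_pairs(sentence: &[usize], window: usize) -> Vec<(usize, usize)> {
    let mut pairs = Vec::new();
    for (i, &center) in sentence.iter().enumerate() {
        let start = i.saturating_sub(window);
        let end = (i + window + 1).min(sentence.len());
        for j in start..end {
            if j != i {
                pairs.push((center, sentence[j]));
            }
        }
    }
    pairs
}
```

For Skip-gram these pairs are used directly; for CBOW the same window is instead aggregated into one averaged context per center word.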