
A Vocabulary-Free Infinity-Gram Model for Nonparametric Bayesian Chord Progression Analysis

Authors Kazuyoshi Yoshii, Masataka Goto
Publication Info Retrieved here
Retrieval date 3/24/12

Proposes a probabilistic, vocabulary-free infinity-gram model for chord progression analysis using Bayesian nonparametrics.

Notation

Chords can be notated as 17 variations on each of 12 pitch classes, such as C:maj or F#:dim7, giving 205 labels (<math>17 \times 12 + 1</math>, the extra label being “N” for no chord), or explicitly by their components, such as C:100010010000 for C major, giving 49153 labels (<math>2^{12} \times 12 + 1</math>).
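
As a rough illustration (not from the paper), the component notation can be produced from the root label and a set of interval templates; the templates below are illustrative assumptions:

<code python>
# Minimal sketch of the component-based notation above: a root label plus a 12-bit
# vector marking which degrees (semitones above the root) are present.
# The quality templates are illustrative assumptions, not taken from the paper.

TEMPLATES = {"maj": (0, 4, 7), "min": (0, 3, 7), "dim7": (0, 3, 6, 9)}

def to_components(label: str) -> str:
    """Convert a label such as 'C:maj' into the root:12-bit component notation."""
    root, quality = label.split(":")
    bits = ["0"] * 12
    for degree in TEMPLATES[quality]:
        bits[degree] = "1"
    return f"{root}:{''.join(bits)}"

print(to_components("C:maj"))   # C:100010010000, matching the example above
</code>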

N-gram models

Chord patterns are closely related to composer style and musical genre, and chord progression models can improve automatic chord recognition algorithms (for example, by fusing with a joint probabilistic model of keys, chords, and bass notes) as well as automatic transcription. N-gram models are typically used to predict chords: each chord is treated as dependent on the previous (n-1) chords using a Markov model trained on observed data. Not all n-grams are observed, so heuristic smoothing must be used to assign unobserved n-grams a non-zero probability, which has no solid theoretical foundation. N-gram models also assume that the length of chord patterns is fixed at n and that only a limited set of chord labels, defined in advance, can appear.

More specifically, given a chord vocabulary <math>W</math> with size <math>V</math>, <math>w \in W</math> is a chord and <math>u \in W^{n-1}</math> is a context of <math>n-1</math> chords. The observed data is <math>X</math>, a single sequence of <math>M</math> chords <math>x_m \in W</math>, with the assumption that <math>x_m</math> depends only on the past <math>n-1</math> chords. The goal is to estimate <math>P_u(w|X)</math>, the probability of chord <math>w</math> given context <math>u</math>. Letting <math>c_{uw}</math> be the number of occurrences of chord <math>w</math> following context <math>u</math>, the naive maximum-likelihood estimate is <math>P^{ML}_u(w|X) = \frac{c_{uw}}{c_{u\cdot}}</math>, where <math>c_{u\cdot}</math> is the total number of chords following context <math>u</math> in the observed data; this makes <math>P_u(w|X) = 0</math> whenever <math>c_{uw} = 0</math>. Smoothing techniques used to prevent zero probabilities include Kneser-Ney and its variants.
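
A minimal sketch of the maximum-likelihood estimate above, shown for bigrams (n = 2); variable names are illustrative:

<code python>
# P^ML_u(w|X) = c_uw / c_u. computed from raw bigram counts; unseen successors get
# probability zero, which is exactly the problem smoothing is meant to fix.
from collections import Counter, defaultdict

def ml_bigram(chords):
    """Return P^ML_u(w) for every context u observed in the sequence."""
    counts = defaultdict(Counter)
    for u, w in zip(chords, chords[1:]):
        counts[u][w] += 1                      # c_uw
    probs = {}
    for u, ctr in counts.items():
        total = sum(ctr.values())              # c_u.
        probs[u] = {w: c / total for w, c in ctr.items()}
    return probs

seq = ["C:maj", "F:maj", "G:maj", "C:maj", "F:maj", "C:maj"]
print(ml_bigram(seq)["C:maj"])   # {'F:maj': 1.0}; any other successor gets 0
</code>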

The hierarchical Pitman-Yor language model (HPYLM) uses Pitman-Yor (PY) processes, which are distributions over distributions over a sample space, written as <math>G \sim PY(d, \theta, G_0)</math>, where <math>d</math> and <math>\theta</math> are positive real numbers (discount and strength) and <math>G_0</math> is a base measure, i.e., a distribution over the sample space. The HPYLM is formulated by layering PY processes in a hierarchical Bayesian manner. Let <math>G_\phi</math> be a unigram distribution over <math>W</math>, where <math>\phi</math> is the empty context and <math>G_\phi(w)</math> is the unigram probability of chord <math>w</math>. An n-gram distribution <math>G_u</math> is drawn from a PY process with base measure <math>G_{\pi(u)}</math> as <math>G_u \sim PY(d_{|u|}, \theta_{|u|}, G_{\pi(u)})</math>, where <math>\pi(u)</math> is <math>u</math> without its first (oldest) chord and <math>d_{|u|}</math> and <math>\theta_{|u|}</math> are discount and strength parameters that depend on <math>|u|</math>; <math>G_u</math> is thus generated recursively, with the global base measure <math>G_0(w) = 1/V</math> at the root.
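
A small sketch of the context-shortening operation <math>\pi(u)</math>, assuming contexts are stored as tuples ordered oldest chord first:

<code python>
# pi(u): each context backs off to the same context with its oldest chord removed;
# the empty tuple () plays the role of the empty context phi.

def parent(context: tuple) -> tuple:
    """Return pi(u) by dropping the oldest chord of the context."""
    return context[1:]

u = ("F:maj", "G:maj")           # bigram context of a trigram restaurant
print(parent(u))                  # ('G:maj',)  -> bigram restaurant
print(parent(parent(u)))          # ()          -> unigram restaurant phi
</code>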

Once the HPYLM is defined, observed data <math>X</math> is generated according to the Chinese restaurant franchise (CRF) stochastic process, where contexts are like restaurants, the <math>M</math> observed variables <math>X</math> are like customers, and the <math>V</math> chord types <math>W</math> are like dishes; each restaurant has an unbounded number of tables, each of which is served a dish. Supposing that <math>x_1,\cdots,x_M</math> are generated sequentially, customer <math>x_m</math> enters restaurant <math>u = x_{m-(n-1)} \cdots x_{m-1}</math> at depth <math>n-1</math>. Let <math>t_{uw}</math> be the number of tables serving dish <math>w</math> in restaurant <math>u</math> and <math>c_{uwk}</math> be the number of customers sitting at table <math>k</math> and eating dish <math>w</math>. Customer <math>x_m</math> either sits at an existing table <math>k</math> and eats <math>w</math> with probability proportional to <math>c_{uwk} - d_{|u|}</math>, setting <math>x_m = w</math> and incrementing <math>c_{uwk}</math>, or sits at a new table <math>k = t_{u\cdot} + 1</math> with probability proportional to <math>d_{|u|}t_{u\cdot} + \theta_{|u|}</math>, sending a proxy customer to restaurant <math>\pi(u)</math>, where the proxy behaves in the same recursive manner; if the proxy eventually eats <math>w</math> in restaurant <math>\pi(u)</math>, <math>w</math> is also served at the new table <math>k</math> in restaurant <math>u</math> and <math>x_m</math> eats <math>w</math>, incrementing <math>t_{uw}</math> and <math>c_{uwk}</math> and setting <math>x_m = w</math>. If a proxy customer sits at a new table in the root restaurant <math>\phi</math>, its dish is served according to the global base measure <math>G_0</math>. Specifically, given a seating arrangement <math>S</math>, a chord <math>w</math> following context <math>u</math> is generated according to <math>P_u^{HPY}(w|S) = \frac{c_{uw\cdot} - d_{|u|}t_{uw}}{c_{u\cdot}+\theta_{|u|}} + \frac{d_{|u|}t_{u\cdot} + \theta_{|u|}}{c_{u\cdot} + \theta_{|u|}}P_{\pi(u)}^{HPY}(w|S)</math>, where <math>P_{\pi(u)}^{HPY}(w|S)</math> is obtained by applying the same formula recursively with <math>\pi(u)</math> in place of <math>u</math>.
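
A minimal sketch of this recursive predictive probability, assuming illustrative count structures (c[u][w] customers eating w in restaurant u, t[u][w] tables serving w):

<code python>
# Recursive CRF predictive probability P_u^HPY(w|S), backing off from u to pi(u)
# and finally to the uniform base measure G_0 = 1/V at the root restaurant phi.
# Contexts u are tuples of chords ordered oldest first; () stands for phi.

def p_hpy(w, u, c, t, d, theta, V):
    """Compute P_u^HPY(w|S) from seating counts and depth-wise (d, theta) parameters."""
    c_uw = c.get(u, {}).get(w, 0)                 # c_{uw.}
    c_u = sum(c.get(u, {}).values())              # c_{u.}
    t_uw = t.get(u, {}).get(w, 0)                 # t_{uw}
    t_u = sum(t.get(u, {}).values())              # t_{u.}
    d_u, th_u = d[len(u)], theta[len(u)]
    # back-off term: parent restaurant pi(u), or G_0 = 1/V at the root restaurant phi
    back = 1.0 / V if u == () else p_hpy(w, u[1:], c, t, d, theta, V)
    return (c_uw - d_u * t_uw + (d_u * t_u + th_u) * back) / (c_u + th_u)

# Toy check: with no customers seated, the prediction falls back to G_0 = 1/V.
d, theta = [0.5, 0.5, 0.5], [1.0, 1.0, 1.0]
print(p_hpy("C:maj", ("G:maj",), {}, {}, d, theta, V=205))   # 1/205
</code>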

In order to estimate <math>P_u(w|X)</math> in a Bayesian manner, the expected value of <math>P_u^{HPY}(w|S)</math> is calculated under the posterior <math>P(S|X)</math>: <math>P_u^{HPY}(w|X) = \sum_{S} P_u^{HPY}(w|S)P(S|X) \approx \frac{1}{L}\sum_{l=1}^{L}P_u^{HPY}(w|S_l)</math>, using Gibbs sampling, where <math>L</math> is the number of i.i.d. seating arrangements sampled from <math>P(S|X)</math>. The Gibbs sampling is carried out by first adding all customers according to a posterior CRF, where each customer <math>x_m = w</math> sits at a new or existing table; then a customer <math>x_m</math> is selected randomly and removed from the tree, any proxies and tables that become empty are also removed, and <math>x_m</math> is added into the tree again according to the posterior CRF. The parameters <math>d_0, \cdots, d_{n-1}</math> and <math>\theta_0, \cdots, \theta_{n-1}</math> are given beta and gamma prior distributions, respectively, and their values are sampled from the corresponding posterior distributions.
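
A sketch of the Monte Carlo average only, assuming the Gibbs sampler itself is available elsewhere and `samples` holds one (c, t) count pair per sampled seating arrangement (reusing p_hpy from the sketch above):

<code python>
# P_u(w|X) is approximated by averaging P_u^HPY(w|S_l) over L sampled seating
# arrangements S_1..S_L; the sampler that produces `samples` is omitted here.

def p_bayes(w, u, samples, d, theta, V):
    """P_u(w|X) ~= (1/L) * sum_l P_u^HPY(w|S_l)."""
    return sum(p_hpy(w, u, c, t, d, theta, V) for c, t in samples) / len(samples)
</code>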

HPYLM forces all customers to enter restaurants at depth <math>n-1</math>; the variable-order PY language model (VPYLM) allows each customer to enter a restaurant at variable depth. Each chord <math>x_m</math> is associated with a latent variable <math>z_m</math> that indicates the value of <math>n</math>; all possible values of <math>z_m</math> are considered, making it an infinity-gram model. To determine <math>z_m</math>, <math>x_m</math> descends the tree following <math>\phi \rightarrow x_{m-1} \rightarrow x_{m-2} \rightarrow \cdots</math> by backtracking the context <math>u</math>, stopping at restaurant <math>u_i</math> with probability <math>\eta_{u_i}</math>, giving <math>z_m = n</math> with probability <math>P_u(n|\eta) = \eta_{u_{n-1}} \prod_{i=0}^{n-2}(1-\eta_{u_i})</math>. A beta prior distribution with parameters <math>\alpha</math> and <math>\beta</math> is placed on <math>\eta</math>, so that <math>p(\eta) = \prod_{u\in tree}Beta(\eta_u|\alpha,\beta)</math>. Given <math>z_m</math>, <math>x_m</math> is determined according to a CRF with seating arrangement <math>S</math>, and chord <math>w</math> following context <math>u</math> is predicted by <math>P_u^{VPY}(w|S) = \sum_n P_u^{VPY}(w|n, S)P_u(n|S)</math>, where <math>P_u^{VPY}(w|n, S)</math> is obtained from the <math>P_u^{HPY}(w|S)</math> equation above and <math>P_u(n|S) = \int P_u(n|\eta)p(\eta|S)d\eta</math>. The predictive distribution is found as in HPYLM, except that VPYLM needs to sample <math>z_m</math> from its posterior distribution before adding customer <math>x_m</math> to the tree, using <math>P_u(n|S, w)\propto P_u(w, n|S) = P_u^{VPY}(w|n,S)P_u(n|S)</math>.
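
A minimal sketch of the stopping process <math>P_u(n|\eta)</math>, with illustrative stop probabilities:

<code python>
# Descending phi -> x_{m-1} -> x_{m-2} -> ..., the customer stops at restaurant u_{n-1}
# with probability eta_{u_{n-1}} after passing the shallower restaurants.

def depth_distribution(etas):
    """P_u(n|eta) = eta_{u_{n-1}} * prod_{i=0}^{n-2} (1 - eta_{u_i}), for n = 1, 2, ..."""
    probs, survive = [], 1.0
    for eta in etas:                   # eta at depths 0 (phi), 1, 2, ...
        probs.append(survive * eta)    # stop at this depth
        survive *= 1.0 - eta           # keep descending
    return probs

print(depth_distribution([0.5, 0.5, 0.5]))   # [0.5, 0.25, 0.125]; the rest goes deeper
</code>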

To handle a vocabulary that is not fixed in advance but keeps growing, the NPYLM was formulated using a global base measure <math>G_0</math> over a countably infinite number of variable-length words. In language modeling, a spelling model based on a letter-level VPYLM is used as the global base measure <math>G_0</math> of a word-level VPYLM, regarding each word as a sequence of letters generated by a letter-level CRF, where the word length is assumed to follow a Poisson distribution.

Vocabulary-free infinity-gram model

A vocabulary-free infinity-gram model can be created by extending these nonparametric n-gram models. In this way, the distribution of the next chord can be formalized with a probabilistic generative model of chord sequences, each chord pattern can have variable length, and any combination of notes can constitute a chord. The chord sequence model is proposed similarly to the NPYLM, except that chords are simultaneous combinations of notes, whereas words are temporal sequences of letters.

Chords are handled by formulating a probabilistic model based on the component notation as the global base measure <math>G_0</math> of a chord-level VPYLM. Each chord <math>w</math> is written as <math>w_0:w_1 \cdots w_{12}</math>, with the root <math>w_0</math> following a 13-dimensional discrete distribution and the components following Bernoulli distributions: <math>G_0(w) = p(w|\pi, \tau) = \pi_{w_0} \prod_{i=1}^{12}\tau_i^{w_i}(1-\tau_i)^{1-w_i}</math>, where <math>\pi = \{\pi_C,\pi_{C\#}, \cdots, \pi_N\}</math> represents the probabilities of each pitch class and “N” (no chord) and <math>\tau = \{\tau_1, \cdots, \tau_{12}\}</math> indicates the existence probabilities of the respective degrees, with prior distributions <math>p(\pi, \tau) = Dir(\pi|a_0)\prod_{i=1}^{12}Beta(\tau_i|b_0, c_0)</math> and hyperparameters <math>a_0 = b_0 = c_0 = 0.5</math>.
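
A minimal sketch of this base measure for a chord in component notation, with illustrative (uniform) values for <math>\pi</math> and <math>\tau</math>:

<code python>
# G_0(w) = pi_{w_0} * prod_i tau_i^{w_i} * (1 - tau_i)^{1 - w_i} for a chord w_0:w_1...w_12.

def g0(chord, pi, tau):
    """Probability of a component-notation chord under the vocabulary-free base measure."""
    root, bits = chord.split(":")
    p = pi[root]
    for i, bit in enumerate(bits):
        p *= tau[i] if bit == "1" else 1.0 - tau[i]
    return p

roots = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B", "N"]
pi = {name: 1.0 / 13 for name in roots}   # uniform over 12 pitch classes plus "N"
tau = [0.5] * 12                          # each degree present with probability 0.5
print(g0("C:100010010000", pi, tau))      # (1/13) * 0.5**12
</code>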

Given a seating arrangement <math>S</math>, the posterior distribution of <math>\pi</math> and <math>\tau</math> can be calculated as <math>p(\pi, \tau|S) = Dir(\pi|a_0 + n)\prod_{i=1}^{12}Beta(\tau_i|b_0 + n_i, c_0 + \bar{n}_i)</math>, where <math>n = \{n_v\}</math>, <math>v</math> is one of the pitch classes or “N”, <math>n_v</math> is the number of tables serving dishes with root note <math>v</math> in the root restaurant <math>\phi</math>, <math>n_i</math> is the number of tables serving dishes containing the <math>i</math>-th note in <math>\phi</math>, and <math>\bar{n}_i</math> is the number of tables serving dishes without the <math>i</math>-th note in <math>\phi</math>. The predictive distribution of the next chord and the Gibbs sampling algorithm are as in VPYLM, except that when a customer sits at a new table in the root restaurant, the values of <math>n_v</math> and <math>n_i</math> or <math>\bar{n}_i</math> are incremented according to the components of the target chord, and are decremented when such a table is removed.
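
A minimal sketch of this bookkeeping, with illustrative names; a new table in the root restaurant uses delta = +1, a removed table delta = -1:

<code python>
# Adjust the counts behind the Dirichlet/Beta posteriors when a table serving `chord`
# is added to or removed from the root restaurant phi.

def update_root_counts(chord, n_root, n_on, n_off, delta=+1):
    """Adjust n_v (root), n_i (note present) and bar{n}_i (note absent) by delta."""
    root, bits = chord.split(":")
    n_root[root] = n_root.get(root, 0) + delta
    for i, bit in enumerate(bits):
        if bit == "1":
            n_on[i] += delta      # n_i
        else:
            n_off[i] += delta     # bar{n}_i

n_root, n_on, n_off = {}, [0] * 12, [0] * 12
update_root_counts("C:100010010000", n_root, n_on, n_off, delta=+1)
print(n_root, sum(n_on), sum(n_off))   # {'C': 1} 3 9
</code>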

To test this model, chord sequences for 137 major-scale Beatles songs were used after transposition to the C-major key, comprising 10761 chords with 103 chord types in label-based notation and 149 chord types in component-based notation. The effectiveness of the infinity-gram model was compared with a variety of existing methods and with HPYLM and VPYLM models incorporating the vocabulary-free base measure <math>G_0</math>, using label-based notation. The effectiveness of vocabulary-free modeling in component-based notation was also measured. All tests used 10-fold cross-validation with perplexity (the average number of next-chord candidates) as the metric. VPYLM yielded the lowest perplexity in the label-based setting, performing best when <math>n</math> was marginalized out, taking all possibilities into account. In the component-based setting, the vocabulary-free VPYLM yielded perplexity substantially lower than the other models, suggesting robustness to sparseness.
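
A minimal sketch of the perplexity metric as it is usually computed, i.e., the exponentiated average negative log-probability of the held-out chords; the probabilities below are illustrative:

<code python>
# Perplexity of a predictive model over held-out chords: exp of the average negative
# log-probability, interpretable as the average number of next-chord candidates.
import math

def perplexity(probs):
    """probs: predictive probability assigned to each held-out chord."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

print(perplexity([0.25, 0.5, 0.125]))   # 4.0
</code>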
