
PhD 2022-2025 – Natural Language Processing approaches in the musical domain: suitability, performance and limits

  • Fully funded PhD 2022-2025 in natural language processing and computer music
  • In Lille, France (CRIStAL, CNRS, Inria, Université de Lille), within the Magnet team
  • Supervisors and contacts:
  • Deadline for applications: (closed)
  • Links: https://www.algomus.fr/jobs
  • The Algomus computer music research team (CRIStAL, UMR CNRS 9189, University of Lille) specializes in algorithms for the analysis and composition of musical scores.
  • The Magnet research team (Inria, CRIStAL, Université de Lille) specializes in machine learning and Natural Language Processing (NLP).

Context

In the last ten years, deep neural networks have been intensively investigated in the field of Natural Language Processing (NLP). This research has led to multiple applications, including automated corpus annotation and content generation.

The temporal nature of music favors its representation as sequences of elements at various scales, most commonly musical notes or chords, which are comparable to sequences of words in NLP. This sequential point of view, together with the common assimilation of music to a kind of language, has motivated the use of methods originally designed for NLP in Music Information Retrieval (MIR) tasks, including musical analysis and generation.
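The analogy above can be made concrete by encoding notes as discrete tokens drawn from a vocabulary, just as words are in NLP. The following is a minimal sketch under an invented token scheme (the `PITCH_`/`DUR_` names and the melody are illustrative assumptions, not a scheme from this project):

```python
# Hypothetical tokenization of a melody: each (pitch, duration) pair
# becomes two vocabulary items, yielding a word-like token sequence.
melody = [("C4", 1.0), ("E4", 1.0), ("G4", 0.5), ("C5", 2.0)]

def tokenize(notes):
    """Turn (pitch, duration) pairs into a flat token sequence."""
    tokens = []
    for pitch, duration in notes:
        tokens.append(f"PITCH_{pitch}")
        tokens.append(f"DUR_{duration}")
    return tokens

tokens = tokenize(melody)
# Map each distinct token to an integer id, as a sequence model expects.
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[t] for t in tokens]
```

Real MIR tokenizers differ mainly in which musical attributes (velocity, bars, voices) get their own tokens; that choice is precisely one of the questions this thesis would examine.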

Objectives

The main goal of this research will be to evaluate the adaptability, performance and relevance of NLP techniques when applied to the symbolic musical domain. We will focus in particular on the following three principles:

  • self-attention
  • tokenization
  • transfer learning

These principles will be investigated through the lens of the structural and epistemological differences between natural language and music.
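Of the three principles, self-attention is the most compact to illustrate: every position in a token sequence attends to every other position. Below is a minimal NumPy sketch of scaled dot-product self-attention; the toy dimensions and random embeddings are assumptions for illustration, not a musical model:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)          # one attended vector per token
```

Because the attention weights are unconstrained over the whole sequence, the same mechanism applies whether the tokens are words, notes or chords, which is what makes transfer between the two domains plausible in the first place.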

Profile of the candidate

  • Master’s degree (or equivalent) in computer science, machine learning, or natural language processing.
  • Musical knowledge and practice would be a plus.

References

  • [1] Vaswani, Ashish, et al. “Attention Is All You Need.” NeurIPS 2017.
  • [2] Huang, Cheng-Zhi Anna, et al. “Music Transformer.” ICLR 2019.
  • [3] Devlin, Jacob, et al. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” NAACL 2019.
  • [4] Wang, Ziyu, and Gus Xia. “MuseBERT: Pre-training of Music Representation for Music Understanding and Controllable Generation.” ISMIR 2021.