Job offers: visits, internships, PhDs, and employment

Feel free to contact us at contact@algomus.fr if you are interested in joining our team, even if your subject of interest is not detailed below. You can also have a look at job offers from previous years to see the kind of subjects we like to work on.

Visits by PhD students or post-docs (from 2-3 weeks up to 2-3 months) can also be arranged through mobility funding. Contact should be made several months in advance. (open)

Internship positions for 2026 (2-6 months) will be available, this year including the following themes:

Some other detailed subjects may be posted in November 2025.

PhD positions for 2026-2029 will be published soon.

2026 research and/or development internships in music computing


Machine Listening for Reinforcement Learning with Agent-based Music Performance Systems

Context

This internship takes place in the scope of the MICCDroP project, which aims to integrate mechanisms of continual learning into agent-based music performance systems, towards long-term partnerships in human-AI musical interaction. In particular, one aspect of this project is the use of reinforcement learning and curiosity-driven learning to equip an AI agent with mechanisms for adaptive engagement in creative processes across multiple practice sessions.

This project sits at the intersection of several key areas, primarily Music Information Retrieval (MIR), Lifelong Learning for Long-Term Human-AI Interaction (LEAP-HRI), and New Interfaces for Musical Expression (NIME). It will be supervised by Ken Déguernel (CNRS researcher) and Claudio Panariello (postdoc at Univ. Lille and composer).

Objective

When using reinforcement learning, user feedback can easily be gathered during the interaction through external means (button, pedal, gesture) indicating whether the interaction is going well, or a posteriori during a reflective session. The goal of this internship, however, is to develop machine listening methods that gather this feedback through the musical interaction itself. Several interaction dynamics will be tested: level of engagement, consistency of play, a/synchronicity…
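
As a toy illustration (not the project's actual design), the sketch below derives a scalar reward from audio features of the performer's last phrase, assuming the librosa library: onset density stands in for level of engagement, and frame-energy stability for consistency of play.

    # Toy sketch: turn machine listening into a scalar reward for an RL agent.
    # Feature choices are illustrative assumptions, not the MICCDroP design:
    # onset density ~ engagement, frame-energy stability ~ consistency of play.
    import librosa
    import numpy as np

    def listening_reward(audio: np.ndarray, sr: int = 44100) -> float:
        # Onset density: onsets per second over the analyzed phrase.
        onsets = librosa.onset.onset_detect(y=audio, sr=sr, units="time")
        density = len(onsets) / max(len(audio) / sr, 1e-6)

        # Energy stability: low variance of frame RMS means steadier playing.
        rms = librosa.feature.rms(y=audio)[0]
        stability = 1.0 / (1.0 + float(np.var(rms)))

        # Bounded combination in [0, 1), usable directly as an RL reward.
        return float(np.tanh(0.1 * density)) * stability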

Tasks:

  1. Explore and implement different machine listening methodologies.
  2. Integrate the developed machine listening system into the existing MICCDroP musical agent (see the communication sketch after this list).
  3. Test the system in situ with professional performers.
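
As a hint of what task 2 might involve, the sketch below streams such a reward to a musical agent over OSC, a protocol understood by Max/MSP and SuperCollider. It assumes the python-osc package; the host, port, and address are hypothetical placeholders, not the actual MICCDroP interface.

    # Minimal sketch: send a listening-based reward to a musical agent over OSC.
    # The host, port and address are hypothetical, not the MICCDroP interface.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # agent's host and port (assumed)

    def send_reward(reward: float) -> None:
        # One float per evaluated phrase; the agent can update its policy on receipt.
        client.send_message("/miccdrop/reward", reward)

    send_reward(0.7)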

Qualifications

Needed:

  • Final year of a Master’s degree in Machine Learning or Music Computing
  • Strong background in Signal Processing for Audio

Preferred:

  • Experience with music programming languages (Max/MSP, SuperCollider, …)
  • Personal music practice

References

  • Jordanous (2017). Co-creativity and perceptions of computational agents in co-creativity. International Conference on Computational Creativity.
  • Nika et al. (2017). DYCI2 agents: merging the ‘free’, ‘reactive’ and ‘scenario-based’ music generation paradigms. International Computer Music Conference.
  • Collins, N. (2012). Automatic Composition of Electroacoustic Art Music Utilizing Machine Listening. Computer Music Journal, 36(3).
  • Tremblay, P.A. et al. (2021). Enabling Programmatic Data Mining as Musicking: The Fluid Corpus Manipulation Toolkit. Computer Music Journal, 45(2).
  • Scurto et al. (2021). Designing deep reinforcement learning for human parameter exploration. ACM Transactions on Computer-Human Interaction, 28(1).
  • Parisi et al. (2019). Continual lifelong learning with neural networks: A review. Neural networks, 113.
  • Small, C. (1998). Musicking: The meanings of performing and listening. Wesleyan University Press.

Research internship (M2), 2026: Similarities across scales and musical dimensions

Context

Musical data carry a considerable amount of information of different natures. However, despite the impressive results of recent generative models, the structural properties of music remain underexploited, due to the lack of a generic paradigm that can account for similarity relationships between elements, temporal sequences, and sections of one or several pieces, at various levels of representation and across different musical dimensions (sound objects, melody, harmony, texture, etc.). Current tools fail to capture this multiscale structure in musical data; in particular, they do not allow real control over the parameters of the music being composed (or generated). The goal of the ANR project MUSISCALE is to develop methods and software tools that account for the relationships between similar elements at different scales, finely enough to recompose a complete musical object from these elements, or to create variations of it.

Objectives

The goals of this internship are:

  1. to explore different notions of similarity between symbolic objects (not only pitch-related: rhythm, texture, and other similarity criteria at different scale levels will be considered; a toy illustration follows below);
  2. to model and implement algorithms to automatically analyze music scores;
  3. to study the relation between the resulting segmentations and the global form of the pieces;
  4. to propose transformations of the objects for creative purposes, and to study the impact of those transformations on the global form.

Although this subject mostly concerns symbolic music, extensions involving timbre or audio analysis/transformation could also be considered.
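
As a toy illustration of objective 1 (not the MUSISCALE method itself), the sketch below measures one simple notion of melodic similarity: a normalized edit distance between pitch-interval sequences, which makes a melody and its transposition maximally similar.

    # Toy sketch of one possible similarity notion between symbolic objects:
    # a normalized edit distance between pitch-interval sequences, invariant
    # under transposition. Rhythm or texture would need their own measures.
    from typing import List

    def intervals(pitches: List[int]) -> List[int]:
        # Successive semitone intervals abstract away absolute pitch.
        return [b - a for a, b in zip(pitches, pitches[1:])]

    def edit_distance(x: List[int], y: List[int]) -> int:
        # Classic Levenshtein dynamic programming.
        d = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
        for i in range(len(x) + 1):
            d[i][0] = i
        for j in range(len(y) + 1):
            d[0][j] = j
        for i in range(1, len(x) + 1):
            for j in range(1, len(y) + 1):
                d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                              d[i][j - 1] + 1,                           # insertion
                              d[i - 1][j - 1] + (x[i - 1] != y[j - 1]))  # substitution
        return d[len(x)][len(y)]

    def similarity(m1: List[int], m2: List[int]) -> float:
        # 1.0 for identical interval content, down to 0.0.
        a, b = intervals(m1), intervals(m2)
        if not a and not b:
            return 1.0
        return 1.0 - edit_distance(a, b) / max(len(a), len(b))

    theme = [60, 62, 64, 65, 67]       # C D E F G (MIDI numbers)
    transposed = [67, 69, 71, 72, 74]  # G A B C D
    print(similarity(theme, transposed))  # 1.0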

Qualifications

  • Final year of a Master’s degree in Computer Science or Music Computing
  • Knowledge of music theory is recommended
  • Ideally, musical practice
  • Candidates at ease with both symbolic and audio analysis are welcome

Opportunities

The ANR funding includes opportunities to pursue a PhD in our lab on this topic or related topics.

References

  • ALLEGRAUD P., BIGO L., FEISTHAUER L., GIRAUD M., GROULT R., LEGUY E., LEVÉ F. “Learning Sonata Form Structure on Mozart’s String Quartets”. Transactions of the International Society for Music Information Retrieval (TISMIR), 2(1):82–96, 2019.

  • BHANDARI K. and COLTON S. “Motifs, phrases, and beyond: The modelling of structure in symbolic music generation”. In International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar), pp. 33–51. Cham: Springer Nature Switzerland, 2024.

  • BUISSON M., McFEE B., ESSID S., and CRAYENCOUR H-C. “Learning Multi-level Representations for Hierarchical Music Structure Analysis”. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2022.

  • CALANDRA J., CHOUVEL J.-M., and DESAINTE-CATHERINE M. “Hierarchisation algorithm for MORFOS: a music analysis software”. In Proceedings of the International Computer Music Conference (ICMC), 2025.

  • COUTURIER L., BIGO L., and LEVÉ F. “Comparing Texture in Piano Scores”. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2023.

  • NIETO O., MYSORE G.J., WANG C. et al. “Audio-Based Music Structure Analysis: Current Trends, Open Challenges, and Applications”. Transactions of the International Society for Music Information Retrieval (TISMIR), 3(1):246–263, 2020.

Archived offers