Job offers: visits, internships, PhD positions, and employment
Feel free to contact us at contact@algomus.fr if you are interested in joining our team, even if your subject is not detailed below. You can also have a look at job offers from previous years to see the kind of subjects we like to work on.
Visits by PhD students or post-docs (from 2-3 weeks to 2-3 months) can also be arranged through mobility funding. Please get in touch several months in advance. (open)
Internship positions for 2026 will be available (2-6 months), this year on the following themes:
- Co-creativity, Machine learning: Machine Listening for Reinforcement Learning with Agent-based Music Performance Systems (K. Déguernel, C. Panariello) (open)
- Music structure: Similarities across scales and musical dimensions (F. Levé, in Amiens, Y. Teytaud) (open)
- Corpus / Web development: Music corpora on the Dezrann web platform (E. Leguy, M. Giraud) (soon)
- Pedagogy / Music training: Generation of melodic and rhythmic exercises for learning music theory (M. Giraud, F. Levé, with Solfy) (soon)
- Video Game Music / Corpus / Music Perception, Loop in Video Game Music: building and analyzing the UFO-50 corpus (Y. Teytaut, F. Levé) (soon)
Some other detailed subjects may be posted in November 2025.
PhD positions (2026-29) will be published soon.
2026 internships in music computing research and/or development
Machine Listening for Reinforcement Learning with Agent-based Music Performance Systems
- Final-year Master's degree internship
- Duration: 4-6 months, with legal “gratification” (€550-600/month)
- Location: Lille (Villeneuve d’Ascq, Laboratoire CRIStAL, mĂ©tro 4 Cantons); partial remote work is possible
- Supervisors and contacts: Ken Déguernel (CR CNRS) & Claudio Panariello (postdoc Univ. Lille)
- Full offer, and links: https://www.algomus.fr/jobs
- Open applications
Context
This internship takes place within the MICCDroP project, which aims at integrating mechanisms of continual learning for long-term partnerships in human-AI musical interaction with agent-based music performance systems. In particular, one aspect of this project is the use of methods based on reinforcement learning and curiosity-driven learning to equip an AI agent with mechanisms for adaptive engagement in creative processes across multiple practice sessions.
This project sits at the intersection of several key areas, primarily Music Information Retrieval (MIR), Lifelong Learning for Long-Term Human-AI Interaction (LEAP-HRI), and New Interfaces for Musical Expression (NIME). It will be supervised by Ken Déguernel (CNRS Researcher) and Claudio Panariello (Univ. Lille postdoc and composer).
Objective
When using reinforcement learning, user feedback can easily be gathered during the interaction through external means (a button, pedal, or gesture) indicating whether the interaction is good or not, or a posteriori during a reflective session. The goal of this internship, however, is to develop machine listening methods to gather this feedback through the musical interaction itself. Several dynamics of interaction will be tested: level of engagement, consistency of play, a/synchronicity…
Tasks:
- Explore and implement different machine listening methodologies.
- Integrate the developed machine listening system into the existing MICCDroP musical agent.
- Test the system in situ with professional performers.
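As a rough illustration of what "feedback through the musical interaction itself" could mean, here is a minimal, hypothetical sketch (the function names and the density-based heuristic are illustrative assumptions, not part of MICCDroP): a machine listening front end would detect note onsets, and a scalar reward for the reinforcement learner could then be derived from, e.g., how close the performer's recent note density stays to a target level of engagement.

```python
# Hypothetical sketch: turning a stream of detected note onsets into a
# scalar reward signal usable by a reinforcement learning agent.
# Real engagement estimation would combine many more listening features.

def engagement_reward(onset_times, window=4.0, target_density=2.0):
    """Crude reward in [0, 1]: how close the performer's onset density
    (onsets per second over the last `window` seconds) is to a target."""
    if not onset_times:
        return 0.0
    now = onset_times[-1]
    recent = [t for t in onset_times if now - t <= window]
    density = len(recent) / window
    # Reward peaks at the target density and decays linearly on either side.
    return max(0.0, 1.0 - abs(density - target_density) / target_density)

# Example: a steady stream of onsets, one every 0.5 s (2 onsets/second)
onsets = [i * 0.5 for i in range(16)]
print(engagement_reward(onsets))
```

In a real setting the onset list would come from a live machine listening module rather than being hand-written, and the reward could mix several interaction dynamics (consistency, synchronicity) instead of density alone.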
Qualifications
Needed:
- Final year of a Master's degree in Machine Learning or Music Computing
- Strong background in Signal Processing for Audio
Preferred:
- Experience with music programming languages (Max/MSP, SuperCollider, …)
- Personal music practice
References
- Jordanous (2017). Co-creativity and perceptions of computational agents in co-creativity. International Conference on Computational Creativity.
- Nika et al. (2017). DYCI2 agents: merging the “free”, “reactive” and “scenario-based” music generation paradigms. International Computer Music Conference.
- Collins, N. (2014). Automatic Composition of Electroacoustic Art Music Utilizing Machine Listening. Computer Music Journal 36(3).
- Tremblay, P.A. et al. (2021). Enabling Programmatic Data Mining as Musicking: The Fluid Corpus Manipulation Toolkit. Computer Music Journal, 45(2).
- Scurto et al. (2021). Designing deep reinforcement learning for human parameter exploration. ACM Transactions on Computer-Human Interaction, 28(1).
- Parisi et al. (2019). Continual lifelong learning with neural networks: A review. Neural networks, 113.
- Small, C. (1998). Musicking: The meanings of performing and listening. Wesleyan University Press.
2026 research internship (M2): Similarities across scales and musical dimensions
- Final-year Master's degree internship
- Duration: 5-6 months, with legal “gratification” (€550-600/month)
- Themes: MIR, symbolic music
- Location: Amiens (MIS laboratory, UPJV, in collaboration with Algomus)
- Supervisors and contacts: Florence Levé (MIS) & Yann Teytaut (CRIStAL)
- Full offer, and links: https://www.algomus.fr/jobs
- Open applications
Context
Musical data carry a considerable amount of information of different natures. However, despite the impressive results of recent generative models, the structural properties of music remain underutilized, due to the lack of a generic paradigm that can account for the similarity relationships between elements, temporal sequences, and sections of one or several pieces at various levels of representation, across different musical dimensions (sound objects, melody, harmony, texture, etc.). Current tools fail to render this multiscale structure in musical data; in particular, they do not allow real control over the parameters of the music being composed (or generated). The goal of the ANR project MUSISCALE is to develop methods and software tools that account for the relationships between similar elements at different scales, finely enough to recompose a complete musical object from these elements, or to create variations of it.
Objectives
The goal of this internship is:
- to explore different notions of similarity between symbolic objects (not only pitch-based: rhythm, texture, and other similarity criteria at different scale levels will also be considered);
- to model and implement algorithms to automatically analyze music scores;
- to study the relation between the resulting segmentations and the global form of the pieces;
- to propose transformations of these objects for creative purposes, and to study the impact of those transformations on the global form.
Although this subject focuses mostly on symbolic music, extensions involving timbre or audio analysis/transformation could also be considered.
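To make one of these "notions of similarity" concrete, here is a minimal, hypothetical sketch (the function names and the choice of measure are illustrative assumptions, not the project's method): comparing two melodic fragments by the edit distance between their pitch-interval sequences, which makes the comparison transposition-invariant. Analogous measures could be defined for rhythm or texture.

```python
# Hypothetical sketch: a transposition-invariant similarity between two
# melodic fragments given as lists of MIDI pitch numbers.

def intervals(pitches):
    """Melodic intervals in semitones, e.g. [60, 62, 64] -> [2, 2]."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(xs, ys):
    """Classic Levenshtein distance between two sequences."""
    prev = list(range(len(ys) + 1))
    for i, x in enumerate(xs, 1):
        cur = [i]
        for j, y in enumerate(ys, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def interval_similarity(a, b):
    """Similarity in [0, 1]: 1.0 for identical interval patterns."""
    ia, ib = intervals(a), intervals(b)
    if not ia and not ib:
        return 1.0
    return 1.0 - edit_distance(ia, ib) / max(len(ia), len(ib))

theme      = [60, 62, 64, 65, 67]   # C D E F G
transposed = [67, 69, 71, 72, 74]   # same contour, a fifth higher
print(interval_similarity(theme, transposed))
```

Running such a measure over all pairs of segments of a score yields a similarity matrix, one possible starting point for the multiscale structural analysis described above.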
Qualifications
- Final year of a Master's degree in Computer Science or Music Computing
- Knowledge of music theory is recommended
- Ideally, personal musical practice
- Candidates at ease with both symbolic and audio analysis are welcome
Opportunities
The ANR funding includes opportunities to pursue a PhD in our lab on this topic or related topics.
References
- ALLEGRAUD P., BIGO L., FEISTHAUER L., GIRAUD M., GROULT R., LEGUY E., LEVÉ F. “Learning Sonata Form Structure on Mozart’s String Quartets”. Transactions of the International Society for Music Information Retrieval (TISMIR), 2(1):82-96, 2019.
- BHANDARI K., COLTON S. “Motifs, phrases, and beyond: The modelling of structure in symbolic music generation”. International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar), pp. 33-51, Springer, 2024.
- BUISSON M., McFEE B., ESSID S., CRAYENCOUR H.-C. “Learning Multi-level Representations for Hierarchical Music Structure Analysis”. Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2022.
- CALANDRA J., CHOUVEL J.-M., DESAINTE-CATHERINE M. “Hierarchisation algorithm for MORFOS: a music analysis software”. Proceedings of the International Computer Music Conference (ICMC), 2025.
- COUTURIER L., BIGO L., LEVÉ F. “Comparing Texture in Piano Scores”. Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2023.
- NIETO O., MYSORE G.J., WANG C. et al. “Audio-Based Music Structure Analysis: Current Trends, Open Challenges, and Applications”. Transactions of the International Society for Music Information Retrieval (TISMIR), 3(1):246-263, 2020.