Music Informatics Seminar

How does computing change the way we listen to and make music? Organised by the Algomus team of the CRIStAL (CNRS, Université de Lille) and MIS (UPJV, Amiens) laboratories, the music informatics seminar is open to all.

Philippe Esling
Monday 14 October 2019

Philippe Esling, IRCAM (Paris)

Artificial Creative Intelligence and Data Science (ACIDS) at IRCAM. Orchestration is the subtle art of writing musical pieces for orchestra. It lies at the exact intersection of the symbolic (musical writing) and signal (audio recording) representations. We will focus on various applications of a variational learning framework: disentangling factors of audio variation, regularizing the topology of the latent space according to perceptual criteria, working with both audio waveforms and spectral transforms, performing timbre style transfer between instruments, and controlling audio synthesizers with the voice. We discuss the development of these approaches as creative tools that expand musical creativity in contemporary music.
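The variational frameworks mentioned above train by balancing a reconstruction term against a Kullback–Leibler regularizer that shapes the latent space. As a minimal illustration (not ACIDS's implementation), the closed-form KL term between a diagonal Gaussian posterior and a standard normal prior can be sketched as:

```python
import math

def gaussian_kl(mu, log_var):
    """KL divergence between a diagonal Gaussian q(z|x) = N(mu, exp(log_var))
    and the standard normal prior p(z) = N(0, I), summed over latent dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, log_var))

# A latent code whose posterior matches the prior exactly has zero divergence:
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

Minimising this term pulls every latent code toward the prior; regularizing the latent topology with perceptual criteria, as in the talk, amounts to adding further constraints on top of this basic objective.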

Other sessions in 2019/2020

Speakers to be announced

Past presentations


Mark Gotham, Cornell University (New York)

Tuesday 3 September 2019, 11:00, CRIStAL (Villeneuve d'Ascq)

Musical Musicologies for the Digital Age. The digital era has begun to revolutionise musicology, both by opening up new avenues for research, and by reforming the methods of more traditional modes of enquiry. In this talk, I introduce some promising sub-disciplines of “Digital Musicology”, discussing the benefits they bring, but also highlighting where caution is needed to preserve musical validity and usefulness – that is, to make sure these digital musicologies remain “musical”. The talk draws on examples from my own research output, and focusses primarily on the analysis and manipulation of scores.

Friday 6 September 2019, MIS (Amiens)

Computational Approaches to ‘Representative’ Examples and Restrictive ‘Rules’ in Music Pedagogy. Theory and analysis courses are often highly focused on concepts such as ‘augmented chords’. Through both traditional and computer-assisted retrieval methods, this talk presents lists at an unprecedented scale for several such concepts. Empirical studies then use this data to assess representativeness through analysis of: the extent of augmented chord usage over time (c.1550–c.1900); the most common resolutions of that chord; and the metrical positions at which it is most often used. The talk also proposes the pedagogical use of ultra-literal rule-realisation to generate ‘devil’s advocate’ scores that clarify the consequences of those rules, and thereby illuminate both the status of the rules and the nature of the works they seek to describe. Examples include the demonstration of https://fourscoreandmore.org/cut-outs/ – where teachers can freely generate such scores, tailored to their classes’ requirements.

Florent Jacquemard
Monday 23 September 2019, 14:00, CRIStAL (Villeneuve d'Ascq)

Florent Jacquemard, Inria and CNAM (Paris)

Parse-based Transcription Framework for Coupled Rhythm Quantization and Score Structuring. We present a formal-language-based framework for MIDI-to-score transcription: the problem of converting a sequence of symbolic musical events with arbitrary timestamps into a structured music score. The framework aims in particular at solving in one pass the two subproblems of rhythm quantization and score production. It relies, throughout the process, on an a priori hierarchical model of music notation given by weighted tree automata. We show that this coupled approach helps to make relevant and interrelated decisions, and we present an algorithm, generalizing Dijkstra's shortest-path algorithm, for computing transcription solutions that are optimal with respect both to the fitness of the quantization to the input and to a measure of complexity of the music notation.
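For reference, the classic algorithm that the talk's transcription procedure generalizes is Dijkstra's single-source shortest path over a graph with non-negative edge weights. A minimal sketch (the toy graph and node names are illustrative, not from the talk):

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra shortest path: `graph` maps node -> [(neighbor, weight)].
    Returns a dict of minimal distances from `source` to each reachable node."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return dist

# Toy search space with positive costs:
graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```

In the transcription setting described above, the search proceeds not over a plain graph but over parse states of weighted tree automata, so that each "shortest path" corresponds to an optimal notated score rather than a route.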

Markus Neuwirth
Monday 11 March 2019, 14:00, Plaine Images (Imaginarium, room FD1, 1st floor)

Markus Neuwirth, DMCL, EPFL (Lausanne)

Statistical Explorations of Contrapuntal Schemata, Cadences, and Harmony in the Annotated Mozart and Beethoven Datasets. Over roughly the past two decades, the study of voice-leading schemata has received increasing attention in the context of historically informed music theory (e.g., Gjerdingen, 2007; Byros, 2013). Schemata are contrapuntal structures with a fixed three-voice skeleton that have been elaborated in a great variety of ways throughout the history of music. Despite the historical importance of schemata, music-theoretical research suffers from a lack of empirical foundation: neither hand-annotated nor automatically labelled corpora are available to date. In my talk, I will first propose a framework for theorizing schemata; second, I will introduce a skipgram approach (Finkensiep et al., 2018) as a powerful computational technique for schema finding; and third, I will show, for three selected schemata (Fonte, Prinner, and cadences), some preliminary empirical findings, using the corpus of Mozart’s Piano Sonatas. The final part of the talk will contrast a schema-based with a harmony-based approach to tonal harmony, drawing on the Annotated Beethoven Corpus (ABC; Neuwirth et al., 2018) and discussing typical statistical properties of tonal harmony.
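The skipgram idea referenced above extracts co-occurring events that need not be strictly adjacent, which is what lets schema skeletons be found under surface elaboration. A minimal illustration on a toy note sequence (a simplified sketch, not the Finkensiep et al. implementation, which works on polyphonic note arrays):

```python
from itertools import combinations

def skipgrams(seq, k, max_skip):
    """All k-element subsequences of `seq` whose indices span at most
    k + max_skip positions, i.e. allowing up to `max_skip` skipped elements."""
    out = []
    for idx in combinations(range(len(seq)), k):
        if idx[-1] - idx[0] + 1 <= k + max_skip:
            out.append(tuple(seq[i] for i in idx))
    return out

# Pairs from a short note sequence, allowing at most one intervening note:
print(skipgrams(["C", "E", "G", "B"], k=2, max_skip=1))
# [('C', 'E'), ('C', 'G'), ('E', 'G'), ('E', 'B'), ('G', 'B')]
```

Contiguous n-grams are the special case `max_skip=0`; increasing `max_skip` trades recall of elaborated schema instances against the number of spurious candidates to filter.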

Monday 10 December 2018

Gabriele Medeot and Reinier De Valk, Jukedeck (London)

The problem of structure in symbolic music generation. In the first half of our talk we will discuss several general problems encountered in symbolic music generation, pertaining to, for example, acquisition and preprocessing of training data, issues with existing attempts, and evaluation of the generated output. We will then showcase a simple recurrent neural network for generating melodies. All this serves as an introduction to the second half of the talk, where we will show how we can improve structure in the output of the aforementioned melody model. To this end, we use StructureNet (as presented at ISMIR 2018), a recurrent neural network that works in tandem with the melody network.
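The predict-the-next-event framing behind such melody models can be illustrated without a neural network. The sketch below uses a bigram table, a far simpler stand-in for the recurrent model described in the talk (the toy corpus and function names are ours):

```python
import random

def train_bigrams(melodies):
    """Collect, for each note, the notes observed to follow it in the corpus."""
    table = {}
    for mel in melodies:
        for a, b in zip(mel, mel[1:]):
            table.setdefault(a, []).append(b)
    return table

def sample(table, start, length, seed=0):
    """Generate a melody by repeatedly sampling a successor of the last note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1])
        if not choices:
            break  # dead end: no observed successor
        melody.append(rng.choice(choices))
    return melody

corpus = [["C", "D", "E", "C"], ["E", "D", "C", "D"]]
table = train_bigrams(corpus)
print(sample(table, "C", 8))
```

A model like this captures local note-to-note statistics but no long-range form, which is precisely the gap that a structure-aware companion network such as StructureNet is designed to address.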

16–18 May 2018 // Special sessions in Amiens

JIM 2018 // Journées d'Informatique Musicale

Presentations, concerts, workshops and an industry round table... The French-speaking computer music community gathers for three days at JIM 2018 in Amiens, this year with a focus on music informatics and pedagogy. Frédéric Bimbot and François Pachet will give invited talks on their work.

Benjamin Martin
Monday 16 April 2018, 14:00, Plaine Images

Benjamin Martin, Simbals (Bordeaux)

Audio identification at Simbals, a start-up spun off from academic research. Benjamin Martin will present the work of Simbals, a Bordeaux start-up founded by researchers from LaBRI, on identifying audio content in a variety of application contexts. He will discuss techniques developed to efficiently index commercial databases for audio identification, detailing the path taken and the challenges involved in transferring results from academic research into technology.

Jérôme Nika
Monday 19 February 2018, 14:00, Plaine Images

Jérôme Nika, IRCAM (Paris)

Generative processes for human-machine musical co-improvisation. Jérôme Nika will present a synthesis of the work carried out at IRCAM on guiding human-machine musical improvisation. The ANR project DYCI2 aims to bring together the paradigms of "free", "reactive" (controlled by listening), and "scenario-guided" (controlled by a temporal structure such as a harmonic progression) musical generation. These generative models, scheduling models, and reactive architectures have been implemented in the Somax, ImproteK, and DYCI2 systems, in interaction with many improvising musicians.

Support the music informatics seminar: 1FKwMQrMN1iReqZWT8DZn9BGpygnC76Ki5

Graphic design: Armelle Thuillier / freepik.com