Towards Knowledge Bases for Understanding Synthesiser Programming


SPEAKER: Dr. Ross Clement


Wednesday 15th of February 2012 - 2pm

VENUE: IOCT


To watch the video of the seminar, use the following links:

High Quality: rtsp://helix.dmu.ac.uk/media2/68127033_hi.rm

Low Quality: rtsp://helix.dmu.ac.uk/media2/68127033_lo.rm


This talk discusses work towards creating programs that can automatically create "voices" for digital synthesisers, and analyses why a number of knowledge-free, Machine Learning based approaches to this problem have failed. Current research is investigating building a knowledge base of rules described in Predicate Logic, incorporating elements of Fuzzy Logic, which should support the creation of automatic synthesiser programming systems that understand the relationship between raw program data and the sound produced. The current focus is on using these rules to produce verbal explanations of how synthesiser programs work, as a subgoal of creating synthesiser programs through reasoning with high-level knowledge.
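
As a rough illustration of the kind of rule such a knowledge base might contain, the sketch below links one synthesiser parameter to a verbal description of the sound via a simple fuzzy membership function. The parameter name (filter cutoff), value range, and wording are assumptions made for the sketch only, not the system described in the talk.

# Illustrative sketch only: a toy fuzzy rule relating a hypothetical
# filter-cutoff parameter (0..127) to a verbal description of the sound.

def membership_bright(cutoff: int) -> float:
    """Fuzzy membership of 'bright' for a cutoff value in 0..127."""
    return max(0.0, min(1.0, (cutoff - 40) / 60.0))

def explain_cutoff(cutoff: int) -> str:
    """Produce a short verbal explanation from the fuzzy rule."""
    bright = membership_bright(cutoff)
    dark = 1.0 - bright
    if bright > 0.7:
        quality = "bright"
    elif dark > 0.7:
        quality = "dark and muted"
    else:
        quality = "moderately bright"
    return (f"The filter cutoff is {cutoff}, so the rule rates the sound as "
            f"{quality} (bright={bright:.2f}, dark={dark:.2f}).")

if __name__ == "__main__":
    for value in (20, 70, 120):
        print(explain_cutoff(value))

A real knowledge base would of course combine many such rules over many parameters; the point of the sketch is only to show how fuzzy memberships over raw program data can be turned into verbal explanations of the resulting sound.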