This talk discusses work towards programs that can automatically create "voices" for digital synthesisers. It analyses why a number of knowledge-free, machine-learning-based approaches to this problem have failed, and presents current research into building a knowledge base of rules expressed in predicate logic, augmented with elements of fuzzy logic. Such a knowledge base should support automatic synthesiser programming systems that understand the relationship between raw program data and the sound produced. The current focus is using these rules to produce verbal explanations of how synthesiser programs work, as a subgoal of actually creating synthesiser programs through reasoning with high-level knowledge.
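To give a flavour of the idea, the following is a minimal, hypothetical sketch (not the author's actual system) of a fuzzy rule base that maps one raw synthesiser parameter, a normalised filter cutoff, to a verbal description of the resulting sound. The membership functions, rule texts, and parameter names are illustrative assumptions only.

```python
# Hypothetical sketch: fuzzy rules turning a raw synth parameter into a
# verbal explanation. All names and rules here are illustrative assumptions.

def low(x):
    """Membership in the fuzzy set 'low' (linear ramp down over [0, 1])."""
    return max(0.0, 1.0 - x)

def high(x):
    """Membership in the fuzzy set 'high' (linear ramp up over [0, 1])."""
    return max(0.0, min(1.0, x))

def describe_cutoff(cutoff):
    """Fire two fuzzy rules on a normalised filter cutoff (0.0-1.0)
    and report the strongest one as a verbal explanation."""
    rules = [
        (low(cutoff), "the low filter cutoff makes the sound dull and muted"),
        (high(cutoff), "the high filter cutoff makes the sound bright and open"),
    ]
    strength, text = max(rules)  # pick the rule with the highest firing strength
    return f"({strength:.2f}) {text}"

print(describe_cutoff(0.2))  # the 'low' rule dominates
print(describe_cutoff(0.9))  # the 'high' rule dominates
```

A real system of the kind described would of course involve many parameters and rules expressed in predicate logic, but the same principle applies: graded rule firing over raw program data yields human-readable explanations of the sound.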