Otoacoustic emissions are something I have been interested in for a long time, ever since discovering the work of Maryanne Amacher.
Engaging the ear in such a way that it starts producing its own music is something that would suit the theme of embodiment I am looking into these days. It also interests me from the perspective of the virtual vs. real sound question. The phenomenon is also highly dependent on spatialisation.

Theory says the acoustic tones need to be pure sine tones with frequencies between 2 kHz and 5 kHz, and with f2 - f1 < 150 Hz. Two tones are produced in the ear: one at the cubic difference 2f1 - f2, and one at the quadratic difference f2 - f1. The most common and effective way of obtaining DPOAEs seems to be to focus on the latter.
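To make the numbers concrete (my own example, not taken from the literature): with f1 = 2000 Hz and f2 = 2100 Hz, the quadratic difference tone would sit at f2 - f1 = 100 Hz and the cubic one at 2f1 - f2 = 1900 Hz.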

I thought this would be a good occasion to try to make use of Pure Data. I used PlugData, which allows Pure Data patches to run inside the DAW as a plugin. The patch is fairly simple: it takes the note played on the MIDI keyboard, multiplies its frequency by 64 (2 to the power of 6, taking the sound six octaves higher), and plays that tone through the left channel. For the right channel, the original note frequency is added to the multiplied one, so the difference between the two channels is always exactly the note being played. You can therefore play the third tone directly, without having to randomize the acoustic frequencies (I am referring to Ghost Tones, which uses randomization: https://fadedinstruments.com/ghost-tones/).
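
For anyone who wants to try something similar, here is a minimal sketch of that routing as a plain-text Pd patch (not my exact patch; the 0.1 output gain is an arbitrary safe default, and since 64 × f + f = 65 × f, the right channel can simply multiply by 65 instead of adding):

#N canvas 0 50 520 340 12;
#X obj 30 30 notein;
#X obj 30 60 mtof;
#X obj 30 100 * 64;
#X obj 160 100 * 65;
#X obj 30 140 osc~;
#X obj 160 140 osc~;
#X obj 30 180 *~ 0.1;
#X obj 160 180 *~ 0.1;
#X obj 30 220 dac~;
#X text 280 100 left: f1 = 64 * note frequency;
#X text 280 120 right: f2 = 65 * note frequency;
#X text 280 140 f2 - f1 = the note you play;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 1 0 3 0;
#X connect 2 0 4 0;
#X connect 3 0 5 0;
#X connect 4 0 6 0;
#X connect 5 0 7 0;
#X connect 6 0 8 0;
#X connect 7 0 8 1;

If my arithmetic is right, keeping f1 inside the 2 kHz to 5 kHz window means staying roughly between MIDI notes 23 and 38 (about 31 to 73 Hz before multiplication), which conveniently also keeps f2 - f1 well under the 150 Hz limit.
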
‘The precise frequency of the DPOAE produced is then used to supply the next acoustic tone which will be introduced, meaning that the ear becomes an active participant in the work – it ‘tells’ the piece which frequency to introduce next and the work responds, creating a cascade of tones that mirrors the shape of the cochlear.’ (referring to Jacob Kirkegaard’s Labyrinthitis) (try?)
https://www.academia.edu/65712602/Composing_with_Absent_Sound