EyeSound Vocal
iOS Universal / Utilities
EyeSound Vocal converts spoken words and other sounds into data-rich images, shown in an active spectrogram on your device's screen. Sound-to-image latency is under 100 milliseconds.
You can tune the spectrogram so that the images of most spoken words are recognizable and distinct. Once you have memorized a good-sized vocabulary of word images, you can gain some visual comprehension of a spoken conversation just by watching the app's spectrogram.
Details in the spectrograms indicate inflection and volume.
Applications/uses:
- entertainment - watch as sounds and conversations morph into image streams
- art - create and download spectrogram images that correspond to spoken phrases or other sounds
------------------------------------------------
NOTE: EyeSound Vocal does not capture medical data and does not provide health-related measurements, diagnoses, or treatment advice. It is not intended for use as a tool for treating medical conditions, nor as a tool for scientific measurement.
------------------------------------------------
User-adjustable parameters in the app include the rate at which spectrogram images scroll across the screen, resolution, colors, color switching, color mapping to sound intensity, and high-frequency boost.
Features include auto gain; display of saved images next to the active or paused spectrogram; pause/resume for image capture; save/recall of images; save/recall of setup parameters; a default parameter set well suited to visualizing human speech; and a full-screen mode showing nothing but the active (or paused) spectrogram.
How the app works:
Sound is captured by your device's microphone. The app processes the audio in small packets, computing the frequencies and intensities present in each one. The spectrogram is built by color-mapping intensity across frequency bins and plotting the result against time. As each packet's data is plotted on arrival, the spectrogram becomes 'active' - it appears as a movie.
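The per-packet processing described above can be sketched in a few lines. This is an illustrative outline only, not the app's actual implementation: a production app would use a fast FFT (e.g. Apple's Accelerate framework) rather than the naive DFT below, and the function names and 0-255 brightness mapping are our own assumptions for the sketch.

```python
import cmath
import math

def packet_spectrum(samples, sample_rate):
    """Compute (frequency, intensity) pairs for one audio packet via a DFT.
    A real app would use an FFT; the naive DFT keeps this sketch short."""
    n = len(samples)
    bins = []
    for k in range(n // 2):  # keep bins up to the Nyquist frequency
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        freq = k * sample_rate / n       # center frequency of bin k, in Hz
        bins.append((freq, abs(s) / n))  # normalized intensity
    return bins

def intensity_to_color(intensity, max_intensity):
    """Map an intensity to a 0-255 brightness value; one such column
    per packet, plotted left to right, forms the moving spectrogram."""
    if max_intensity == 0:
        return 0
    return round(255 * min(intensity / max_intensity, 1.0))
```

For a pure 500 Hz tone sampled at 8000 Hz in a 64-sample packet, the spectrum peaks in the bin centered at 500 Hz, and that bin maps to the brightest color, which is exactly the bright streak you would see in the live spectrogram.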
EyeSound - Past, Present, Future:
45 years ago, in 1979, a young chemical engineer fresh out of MIT had just begun working for GE in Niskayuna, NY. In his spare time he built and experimented with breadboarded circuits incorporating resistors, capacitors, transistors, ICs, lights, microphones, and speakers. One day he assembled a device consisting of a row of LEDs wired into a circuit that lit them sequentially in accordance with an input voltage signal. When he swept this device through an arc by hand he saw - very much to his surprise and fascination - images of complex waveforms painted across empty air. This simple device gave a glimpse of all the information that is normally only heard! The engineer had found a mission: improve upon the device to open to the world a whole new dimension of visual perception.
Our team of two - the engineer and the coder - has over the past several months turned that concept into an app. The initial release is happening now. The 45-year-old mission is on the verge of realization.
We anticipate releasing new versions of the EyeSound Vocal app on a regular basis. Users - please send us your suggestions; we'll use them to guide improvements and to expand our user tips.
We are also looking into interfacing EyeSound with two types of peripheral devices: (1) AR glasses, which could put the active spectrogram front and center in your field of view, so that while watching it you remain more present with the people around you; (2) wearable, directional, higher-quality microphones, to reduce pickup of the background sounds that can obscure the visualizations.
What's new in the latest version?
Version 1.10 improves the resolution of color shades within the spectrogram, so word images are more precise.