Chris Ick
Hey there! You found my website! My name is Christopher-Thomas Abaya Ick, but my friends and colleagues usually just call me Chris. I’m a research scientist in the domain of audio, music, and AI. I’m pretty good at building tools, models, and methods around these things. Currently, I’m interested in neural codecs for generative audio modeling, tools and methods for localization from Ambisonics audio data, and a few other fun side projects.
I’m currently looking for full-time employment opportunities; feel free to reach out to me at chris(dot)ick(at)nyu(dot)edu. Here’s my CV!
I’m always happy to chat over Zoom, but if you want to meet me in person, I’m mostly in the Brooklyn/Lower Manhattan area of NYC.
Recent News
- (June 2025) I started an internship at Sony AI working on neural codecs for training generative models. I’m really excited about the opportunity to get my hands dirty in a new, fast-paced domain of research.
- (May 2025) I graduated! I successfully defended my dissertation, titled Virtual Soundscapes for Machine Listening. Thanks to everyone at MARL who supported me as I pushed my way through the final stages of my PhD!
- (May 2025) New paper alert! Our work Direction-Aware Neural Acoustic Fields for Few-Shot Interpolation of Ambisonic Impulse Responses was accepted for presentation at Interspeech 2025! Thanks again to my colleagues at Mitsubishi Electric Research Laboratories (MERL) for all their support.
- (March 2025) I wrapped up my internship at MERL, where I had been working with the Speech and Audio group on a part-time/remote basis since June 2024. My work at MERL was accepted at two workshops (Audio Imagination at NeurIPS 2024, GenDA at ICASSP 2025) and a conference (see previous bullet).
- (Dec 2024) I presented Spatially-Aware Losses for Enhanced Neural Acoustic Fields at the Audio Imagination Workshop at NeurIPS 2024 (and got some skiing in at Cypress Mountain!).
- (Dec 2024) Our work Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization won the LAP 2024 challenge and was accepted for presentation at ICASSP 2025.
Background
I recently completed my PhD at NYU’s Center for Data Science, where I did my research at the Music and Audio Research Lab (MARL), advised by Prof. Brian McFee. My dissertation, Virtual Soundscapes for Machine Listening, encompasses much of the work I did during my PhD: building tools, training models, and conducting novel research on how AI models understand space and audio through the lens of spatial acoustics. It was a fun blend of signal processing, physical acoustics, machine learning, and a sprinkling of other subjects.
Before my PhD, I was a researcher in NYU’s Department of Physics, where I also did my undergraduate degree, studying how to estimate the periodicity of oscillations in solar flare light curves. I have Professors David Hogg and Kyle Cranmer to thank for encouraging me to branch out into the data sciences, and for giving me the opportunity to develop projects with them prior to beginning my work at CDS.
Personal
Outside of work, I’m an avid cyclist: I race road and cyclocross for KruisCX, and I also enjoy bikepacking, touring, and bike advocacy. I recently biked from Busan to Seoul after attending ICASSP 2024! I’m also into snowboarding, scuba diving, and rock climbing. I love living in Brooklyn and going to live music and dance parties; ask me about my favorite spots to dance! Finally, I’m into a lot of tech hobbies: I’m a recovering mechanical keyboard addict (let me know if you want to buy some parts), and I’m currently building a home server for home automation, media hosting, network management, and a few other tasks. I can probably beat you in Super Smash Bros Melee.