
Publications:
Read Some of Our Work

2023

 

  • Arthur, C., Lehman, F., & McNamara, J. (In Press). Presenting the SWTC: A Symbolic Corpus of Themes from John Williams’ Star Wars Episodes I-IX. Empirical Musicology Review  

  • Clark, B., & Arthur, C. (In Press). Is melody “dead”?: A large scale analysis of pop music melodies from 1960 through 2019. Empirical Musicology Review. 

  • Condit-Schultz, N., & Clark, B. (In press). Have we sold our souls to the drum machine? A historical analysis of tempo stability in Western music recordings. Musicae Scientiae 

  • Alben, N. & Arthur C. (2023). Pupil Dilation as a Function of Pitch Discrimination Difficulty: A Replication of Kahneman and Beatty, 1967. Attention, Perception & Psychophysics. DOI: https://doi.org/10.3758/s13414-023-02765-7 

  • Arthur, C. & Condit-Schultz, N. (2023). The coordinated corpus of popular musics (CoCoPops): A meta-corpus of melodic and harmonic transcriptions. In Proceedings of the International Society of Music Information Retrieval (ISMIR) conference. Milan, Italy. 

  • Arthur, C., Evans, M., McNamara, J., & Davidenko, N. (2023). Looping in your head: A Corpus of Sung Earworm Fragments. In Proceedings of the International Conference on Music Perception and Cognition (ICMPC). Tokyo, Japan.

  • Arthur, C. (2023). Why do songs get “stuck in our heads”? Towards a theory for explaining earworms. Music & Science, 6, 1-15. DOI: https://doi.org/10.1177/2059204323116458

  • Jain, R. & Arthur, C. (2023). An Algorithmic Approach to Automated Symbolic Transcription of Hindustani Vocals. In Proceedings of the 10th International Digital Libraries for Musicology conference (DLfM 2023). Milan, Italy.  

  • McNamara, J. & Arthur, C. (2023). Plugging In: Understanding Player Perceptions of Immersion and Flow in Video Games. In Proceedings of the International Conference on Music Perception and Cognition (ICMPC). Tokyo, Japan.

  •  A. Lerch, “Grundlagen digitaler Audiosignale,” in Handbuch der Audiotechnik, 2nd ed., S. Weinzierl, Ed., Berlin, Heidelberg: Springer Berlin Heidelberg, 2023, pp. 1–13. [Online]. Available: https://doi.org/10.1007/978-3-662-60357-4_31-1

  • A. Lerch, “Audioinhaltsanalyse,” in Handbuch der Audiotechnik, 2nd ed., S. Weinzierl, Ed., Berlin, Heidelberg: Springer Berlin Heidelberg, 2023, pp. 1–20. doi: 10.1007/978-3-662-60357-4_8-1.

  • A. Lerch, An Introduction to Audio Content Analysis: Music Information Retrieval Tasks and Applications, 2nd ed. Hoboken, N.J: Wiley-IEEE Press, 2023. Accessed: Nov. 03, 2022. [Online]. Available: https://ieeexplore.ieee.org/servlet/opac?bknumber=9965970

  • P. Knees and A. Lerch, “MILC 2023: 3rd Workshop on Intelligent Music Interfaces for Listening and Creation,” in Companion Proceedings of the 28th International Conference on Intelligent User Interfaces, in IUI ’23 Companion. Sydney: Association for Computing Machinery, 2023, pp. 185–186. doi: 10.1145/3581754.3584164.

  • Y.-N. Hung, C.-H. H. Yang, P.-Y. Chen, and A. Lerch, “Low-Resource Music Genre Classification with Cross-Modal Neural Model Reprogramming,” in Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece: Institute of Electrical and Electronics Engineers (IEEE), 2023. doi: 10.1109/ICASSP49357.2023.10096568.

  • Y. Ding and A. Lerch, “Audio Embeddings as Teachers for Music Classification,” in Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Milan, Italy, 2023. doi: 10.48550/arXiv.2306.17424.

  • H.-H. Chen and A. Lerch, “Music Instrument Classification Reprogrammed,” in Proceedings of the International Conference on Multimedia Modeling (MMM), Bergen, Norway, 2023. [Online]. Available: https://arxiv.org/abs/2211.0837

  • Clester, I. J., & Freeman, J. (2023). Composing with Generative Systems in the Digital Audio Workstation. Joint Proceedings of the ACM IUI Workshops. 

  • Griffith, A. E., Katuka, G. A., Wiggins, J. B., Boyer, K. E., Freeman, J., Magerko, B., & McKlin, T. (2023). Investigating the relationship between dialogue states and partner satisfaction during co-creative learning tasks. International Journal of Artificial Intelligence in Education, 33(3), 543–582. 

  • Koval, J., Hernandez, D., McKlin, T., Edwards, D., Arce-Nazario, R. A., Carroll-Miranda, J., Perez, I. R. Q., Marrero-Solis, L., Freeman, J., Brown, T. L., & others. (2023). Latinx Culture, Music, and Computer Science Remix in a Summer Camp Experience: Results from a Pilot Study. 2023 ASEE Annual Conference & Exposition. 

  • Smith, J. B., & Freeman, J. (2023). Effects of Visual Explanation on Perceived Creative Autonomy in an AI-Based Generative Music System. Companion Proceedings of the 28th International Conference on Intelligent User Interfaces, 25–28. 

  • Smith, J. B., Vinay, A., & Freeman, J. (2023). The Impact of Salient Musical Features in a Hybrid Recommendation System for a Sound Library. Joint Proceedings of the ACM IUI Workshops. 

  • Sankaranarayanan, R., Hugar, N., Lei, Q., Goel, H., Ottolin, T., and Weinberg, G. (2023). “Mixboard – A Co-Creative Mashup Application for Novices,” International Conference on New Interfaces for Musical Expression (NIME 2023), Mexico City, Mexico.

  • Ottolin, T., Sankaranarayanan, R., Hugar, N., Lei, Q., and Weinberg, G. (2023). “Balancing Musical Co-Creativity: The Case Study of Mixboard, a Mashup Application for Novices,” The 16th International Symposium on Computer Music Multidisciplinary Research, Tokyo, Japan.

  • Yang, N., Rogel, A., Weinberg, G. “Design of an Expressive Robotic Guitarist,” IEEE Robotics and Automation Letters.

 
  • Savery, R., Weinberg, G. (2022). “Robotics: Fast and Curious: A CNN for Ethical Deep Learning Musical Generation,” in Artificial Intelligence and Music Ecosystem, ed. by Martin Clancy, Taylor and Francis Group.
  • Freeman, J. (2022). “The History of Music and Computing,” in Introduction to Digital Music with Python, eds. Horn, West, and Roberts. Focal Press.
  • Freeman, J. (2022). “Live Coding Exposition,” in Live Coding: A User's Manual, ed. Alan Blackwell, Emma Cocker, Geoff Cox, Alex McLean, and Thor Magnusson, MIT Press. https://mitp-content-server.mit.edu/books/content/sectbyfn/books_pres_0/13770/oa.pdf
  • Herre, J., Disch, S., Lerch, A., 2022. Quellcodierung, in: Weinzierl, S. (Ed.), Handbuch der Audiotechnik. Springer, Berlin, Heidelberg, pp. 1–23. https://doi.org/10.1007/978-3-662-60357-4_34-1
  • Lerch, A., 2023. An Introduction to Audio Content Analysis: Music Information Retrieval Tasks and Applications, 2nd ed. Wiley-IEEE Press, Hoboken, N.J.
  • Lerch, A., 2022. libACA, pyACA, and ACA-Code: Audio Content Analysis in 3 Languages. Software Impacts 13, 100349. https://doi.org/10.1016/j.simpa.2022.100349
  • McCall, L., Freeman, J., McKlin, T., Lee, T., Magerko, B., and Horn, M. (2023). “Complementary Roles of CS + Music Platforms in Student Learning” in Computer Music Journal (accepted and in press, publication expected in early 2023).
  • Griffith, A., Katuka, G., Wiggins, J., Boyer, K., Freeman, J., Magerko, B., and McKlin, T. (2022). “Investigating the Relationship Between Dialogue States and Partner Satisfaction During Co-Creative Learning Tasks,” in International Journal of Artificial Intelligence in Education, 1-40. http://216.69.174.57/pdf/IJAIED_2022.pdf
  • Guo, W., Hua, Z., Kang, Z., Li, D., Wang, L., Wu, Q., Lerch, A., 2022. Deep Reinforcement Learning for Urban Multi-taxis Cruising Strategy. Neural Comput & Applic. https://doi.org/10.1007/s00521-022-07255-9
  • Hung, Y.-N., Wu, C.-W., Orife, I., Hipple, A., Wolcott, W., Lerch, A., 2022. A Large TV Dataset for Speech and Music Activity Detection. EURASIP Journal on Audio, Speech, and Music Processing 2022, 21. https://doi.org/10.1186/s13636-022-00253-8
  • Li, D., Wang, L., Li, L., Guo, W., Wu, Q., Lerch, A., 2022. A Large-Scale Multiobjective Particle Swarm Optimizer With Enhanced Balance of Convergence and Diversity. IEEE Transactions on Cybernetics 1–12. https://doi.org/10.1109/TCYB.2022.3225341
  • Watcharasupat, K.N., Lee, J., Lerch, A., 2022. Latte: Cross-framework Python Package for Evaluation of Latent-based Generative Models. Software Impacts 100222. https://doi.org/10.1016/j.simpa.2022.100222
  • Rogel, Amit, et al. "RoboGroove: Creating Fluid Motion for Dancing Robotic Arms." Proceedings of the 8th International Conference on Movement and Computing. 2022.
  • Rogel, Amit. "Music and Movement Based Dancing for a Non-Anthropomorphic Robot." 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2022.
  • Clester, I. and Freeman, J. (2022). “Alternator: A General-Purpose Generative Music Player,” in Proceedings of the 2022 Web Audio Conference (WAC), Cannes, France. https://zenodo.org/record/6767436
  • Moore, R., Delacoudray, C., Newton, S., Jackson, J., Alemdar, M., Garrett, S., Barbot, H., Freeman, J., Wilson, J., and Grossman, S. (2022). “Your Voice is Power: Integrating Computing, Music, Entrepreneurship, and Social Justice Learning,” in Proceedings of the American Society for Engineering Education 2022 Annual Conference (ASEE), Minneapolis, Minnesota. https://peer.asee.org/your-voice-is-power-integrating-computing-music-entrepreneurship-and-social-justice-learning.pdf
  • Katuka, G., Webber, A., Wiggins, J., Boyer, K., Magerko, B., McKlin, T., and Freeman, J. (2022). “The Relationship between Co-Creative Dialogue and High School Learners’ Satisfaction with their Collaborator in Computational Music Remixing,” in Proceedings of the ACM Conference on Computer-Supported Cooperative Work (CSCW), Article No. 123, pp. 1-24. https://dl.acm.org/doi/abs/10.1145/3512970
  • Chen, H.-H., Lerch, A., 2023. Music Instrument Classification Reprogrammed, in: Proceedings of the International Conference on Multimedia Modeling (MMM). Presented at the MMM, Bergen, Norway.
  • Hung, Y.-N., Lerch, A., 2022a. Feature-informed Latent Space Regularization for Music Source Separation, in: Proceedings of the International Conference on Digital Audio Effects (DAFX). Presented at the DAFx, arXiv, Vienna, Austria. https://doi.org/10.48550/arXiv.2203.09132
  • Hung, Y.-N., Lerch, A., 2022b. Feature-informed Embedding Space Regularization for Audio Classification, in: Proceedings of the European Signal Processing Conference (EUSIPCO). Presented at the EUSIPCO, Belgrade, Serbia. https://doi.org/10.48550/arXiv.2206.04850
  • Kalbag, V., Lerch, A., 2022. Scream Detection in Heavy Metal Music, in: Proceedings of the Sound and Music Computing Conference (SMC). Presented at the SMC, Saint-Etienne. https://doi.org/10.48550/arXiv.2205.05580
  • Ma, A.B., Lerch, A., 2022. Representation Learning for the Automatic Indexing of Sound Effects Libraries, in: Proceedings of the International Society for Music Information Retrieval Conference (ISMIR). Presented at the ISMIR, Bangalore, IN. https://doi.org/10.48550/arXiv.2208.09096
  • Vinay, A., Lerch, A., 2022. Evaluating Generative Audio Systems and their Metrics, in: Proceedings of the International Society for Music Information Retrieval Conference (ISMIR). Presented at the ISMIR, Bangalore, IN. https://doi.org/10.48550/arXiv.2209.00130
  • Claire Arthur and Rhythm Jain, “Predicting Emotionally-Salient Musical Moments: A Corpus Study.” Conference for the Society for Music Perception and Cognition, Portland, OR, 2022
  • Rhythm Jain and Claire Arthur, “A Cross-cultural Examination of Raga perception: Examining the link between Time of Day and Enculturation.” Conference for the Society for Music Perception and Cognition, Portland, OR, 2022
  • John McNamara, Ethan Lindblom, and Claire Arthur, “Going with the Flow: Can Sound Design Keep Players Immersed in Video Games?” Conference for the Society for Music Perception and Cognition, Portland, OR, 2022
 
  • Savery, R., Weinberg, G.; Machine Learning Driven Musical Improvisation for Mechanomorphic Human-Robot Interaction; Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021

  • Savery, R., Weinberg, G.; Robots and emotion: a survey of trends, classifications, and forms of interaction; Advanced Robotics, 2021

  • Savery, R., Zahray, L., Weinberg, G.; Before, Between, and After: Enriching Robot Communication Surrounding Collaborative Creative Activities; Frontiers in Robotics and AI, 2021

  • Farris, N., Model, B., Savery, R., Weinberg, G.; Musical Prosody-Driven Emotion Classification: Interpreting Vocalists Portrayal of Emotions Through Machine Learning; 18th Sound and Music Computing Conference, 2021

  • Savery, R., Rogel, A.,Weinberg, G.; Emotion Musical Prosody for Robotic Groups and Entitativity; 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) 2021

  • Ram, N., Gummadi, T., Bhethanabotla, T., Savery, R., Weinberg, G.; Say What? Collaborative Pop Lyric Generation Using Multitask Transfer Learning; Proceedings of the 9th International Conference on Human-Agent Interaction, 2021

  • Sankaranarayanan, R., & Weinberg, G.; Design of Hathaani-A Robotic Violinist for Carnatic Music; New Interfaces for Musical Expression (NIME) 2021

  • Yang, N., Sha, R., Sankaranarayanan, R., Sun, Q. and Weinberg, G., Drumming Arm: an Upper-limb Prosthetic System to Restore Grip Control for a Transradial Amputee Drummer; IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 10317-10323

  • Li, Dongyang; Wang, Lei; Lerch, Alexander; Wu, Qidi, “An Adaptive Particle Swarm Optimizer with Decoupled Exploration and Exploitation for Large Scale Optimization,” Swarm and Evolutionary Computation, 60, 2021, ISSN: 2210-6502.

  • Seshadri, Pavan; Lerch, Alexander "Improving Music Performance Assessment with Contrastive Learning," In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pp. 8, Online, 2021

  • Pati, Ashis; Lerch, Alexander, “Is Disentanglement Enough? On Latent Representations for Controllable Music Generation,” In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pp. 8, Online, 2021.

  • Smith, J. and Freeman, J. (2021). “Effects of Deep Neural Networks on the Perceived Creative Autonomy of a Generative Musical System,” in Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), virtual conference.

  • Truesdell, E., Smith, J., Mathew, S., Katuka, G., Griffith, A., McKlin, T., Magerko, B., Freeman, J., and Boyer, K. (2021) “Supporting Computational Music Remixing with a Co-Creative Learning Companion,” Proceedings of the 2021 International Conference on Computational Creativity (ICCC), virtual conference.

  • Bullard, C., Kansal, A., and Freeman, J. (2021) “Comparing Chat Methods for Remote Collaborative Live-Coding,” in Proceedings of the 2021 Audio Mostly Conference, virtual conference.

  • Clester, I., and Freeman, J. (2021). “Composing the Network With Streams,” in Proceedings of the 2021 Audio Mostly Conference, virtual conference.

  • McCall, L., and Freeman, J. (2021). “A 3D Graphic Score Space and the Creative Techniques and Performance Practices that Emerge From it,” in Proceedings of the 2021 Audio Mostly Conference, virtual conference.

  • Griffith, A., Katuka, G., Wiggins, J., Boyer, K., Freeman, J., Magerko, B., and McKlin, T. (2021). “Discovering Co-creative Dialogue States During Collaborative Learning,” in Proceedings of the International Conference on Artificial Intelligence in Education (AIED).

  • Moore, R., Newton, S., Alemdar, M., Grossman, S., Freeman, J., Smith, J., and Berry, T. (2021). “Engaging High School Students in Computer Science Through Music Remixing: An EarSketch-based Pilot Competition & Evaluation,” in Proceedings of the American Society for Engineering Education 2021 Annual Conference (ASEE), virtual conference.

  • Wu, Y. and Freeman, J. (2021). “Ripples: An Auditory Augmented Reality iOS Application for the Atlanta Botanical Garden,” in Proceedings of the 2021 Conference on New Interfaces in Musical Expression (NIME), virtual conference.

  • McKlin, T., McCall, L., Lee, T., Magerko, B., Horn, M., and Freeman, J. (2021). “Leveraging Prior Computing and Music Experience for Situational Interest Formation,” in Proceedings of the ACM Special Interest Group on Computer Science Education (SIGCSE), virtual conference.

  • Carter-Enyi, A., Rabinovitch, G., & Condit-Schultz, N. (2021), "Visualizing Intertextual Form with Arc Diagrams: Contour and Schema-based Methods," Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).

  • Elaine Chew, Psyche Loui, Grace Leslie, Caroline Palmer, Jonathan Berger, Edward W. Large, Nicolò F. Bernardi, Suzanne Hanser, Julian F. Thayer, Michael A. Casey, Pier D. Lambiase. “How Music Can Literally Heal the Heart.” Scientific American, September 18, 2021. (https://www.scientificamerican.com/article/how-music-can-literally-heal-the-heart/)

  • Grace Leslie, “Composing at the Border of Experimental Music and Music Experiment.” In Margulis, L., et al. (Eds.) The Science-Music Borderlands: Reckoning with the Past and Imagining the Future. Cambridge: MIT Press, 2021. (In press)

  • Robert Quon, Michael Casey, Edward Camp, Stephen Meisenhelter, Sarah Steimel, Yinchen Song, Markus Testorf, Grace Leslie, Krzysztof Bujarski, Alan Ettinger, Barbara Jobst. Musical components important for the Mozart K448 effect in epilepsy. Scientific Reports 11(1), 2021, 16490.

  • Robert Quon, Grace Leslie, Edward Camp, Stephen Meisenhelter, Sarah Steimel, Yinchen Song, Alan Ettinger, Krzysztof Bujarski, Michael Casey, Barbara Jobst. 40-Hz auditory stimulation for intracranial interictal activity: A pilot study. Acta Neurologica Scandinavica 144(2), 2021, pp. 192-201.

  • Grace Leslie. Inner Rhythms: Vessels as a Sustained Brain-Body Performance Practice. Leonardo 54(3), 2021, pp. 325–328.

  • R. Michael Winters, Bruce N. Walker, and Grace Leslie. Can You Hear My Heartbeat?: Hearing an Expressive Biosignal Elicits Empathy. In CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan.

  • Ashvala Vinay, Alexander Lerch, Grace Leslie. (2021) “Mind the Beat: Detecting Audio Onsets from EEG Recordings of Music Listening.” In International Conference on Acoustics, Speech, & Signal Processing (ICASSP ’21), June 6-11, 2021, Toronto, ON, Canada.

  • Arthur, C., (2021). Vicentino versus Palestrina: A computational investigation of voice leading across changing vocal densities. Journal of New Music Research. Vol. 50(1), 74-101.

  • Light, L., & Arthur, C. (2021). Voice leading in Palestrina’s Masses: A comparison of interval-succession definitions. Proceedings of the First Annual Future Directions of Music Cognition Conference. (http://org.osu.edu/mascats/proceedings/)

  • Hu, T., & Arthur, C. (2021). A statistical model for melodic reduction. Proceedings of the First Annual Future Directions of Music Cognition Conference. (http://org.osu.edu/mascats/proceedings/)

  • Horn, M., Banerjee, A., West, M., Pinkard, N., Pratt, A., Freeman, J., Magerko, B., and McKlin, T.; TunePad: Engaging Learners at the Intersections of Music and Code.; International Society of the Learning Sciences (ISLS), 2020.

  • Savery, R., Weinberg G.; A Survey of Robots and Emotion: Broad Trends and Models of Emotional Interaction; 29th IEEE International Conference on Robot & Human Interactive Communication.

  • Savery, R., Weinberg, G., Long-Term Interaction and Persistence of Engagement for Musical Interaction using a Genetic Algorithm; 8th International Conference on Human-Agent Interaction, 2020.

  • Zahray, L., Savery, R., Syrkett, L., Weinberg, G.; Robot Gesture Sonification to Enhance Awareness of Robot Status and Enjoyment of Interaction; 29th IEEE International Conference on Robot & Human Interactive Communication, 2020.

  • Savery, R., Zahray, L., Weinberg, G.; Emotional Musical Prosody for the Enhancement of Trust in Robotic Arm Communication; SCRITA 2020 Trust, Acceptance and Social Cues in Human-Robot Interaction (RO-MAN 2020)

  • Huang, J. and Hung, Y. and Pati, A. and Gururani, S. and Lerch, A.; Score-informed Networks for Music Performance Assessment; Proceedings of the International Society for Music Information Retrieval Conference (ISMIR); International Society for Music Information Retrieval (ISMIR); 2020

  • Pati, A. and Gururani, S. and Lerch, A.; dMelodies: A Music Dataset for Disentanglement Learning; Proceedings of the International Society for Music Information Retrieval Conference (ISMIR); International Society for Music Information Retrieval (ISMIR); 2020

  • Hung, Y. and Lerch, A.; Multi-Task Learning for Instrument Activation Aware Music Source Separation; Proceedings of the International Society for Music Information Retrieval Conference (ISMIR); International Society for Music Information Retrieval (ISMIR); 2020

  • Savery, R., Zahray, L., Weinberg, G.; A ConvNet for Ethical Robotic Musical Generation and Interaction; In Clancy, Martin (Ed.) Artificial Intelligence & Creative Music Practice, Routledge, 2021.

  • Wanzer, D., McKlin, T., Freeman, J., Magerko, B., Lee, T., Promoting Intentions to Persist in Computing: An Examination of Six Years of the EarSketch Program. Computer Science Education, 2020.

  • Savery, R., Zahray, L., Weinberg, G., Shimon the Rapper: A Real-Time System for Human-Robot Interactive Rap Battles. The 11th International Conference on Computational Creativity, 2020.

  • Bimbraw K., Fox E., Weinberg G., Hammond F., Towards Sonomyography-Based Real-Time Control of Powered Prosthesis Grasp Synergies. The IEEE Engineering in Medicine and Biology Society, 2020.

  • Savery, R., Zahray, L., Weinberg, G., Shimon Sings: Robotic Musicianship Finds its Voice. In Miranda, E. (Ed.) Handbook of Artificial Intelligence for Music, 2020.

  • Yang, N., Savery, R., Sankaranarayanan, R., Zahray, L., Weinberg, G., Mechatronics-Driven Musical Expressivity for Robotic Percussionists. New Interfaces for Musical Expression, 2020.

  • Leslie, G., Inner Rhythms: Vessels as a Sustained Brain-Body Performance Practice. Leonardo Music Journal, accepted for publication in 2020.

 

  • Condit-Schultz, N., Arthur, C., humdrumR: A New Take on an Old Approach to Computational Musicology. In Proceedings of the International Society of Music Information Retrieval (ISMIR) Conference. Delft, Netherlands, 2019.

  • Napolés, N., Arthur, C., Key-Finding Based on a Hidden Markov Model and Key Profiles. In 6th International Digital Libraries for Musicology Workshop (DLfM 2019). Delft, Netherlands, 2019.

  • Arthur, C., Bringing more information to music informatics: Combining tools, data, and best practices from cognition and MIR. 7th seminar on Cognitively Based Music Informatics (CogMIR), Brooklyn College, New York, keynote presentation, 2019.

  • Clark, B., Arthur, C., Alternative measures: A musicologist workbench for popular music. In I. Barbancho, L. J. Tardon, A. Peinado, A. M. Barbancho (Eds.), Proceedings of the 16th Sound and Music Computing (SMC) Conference (pp. 407-414). Malaga, Spain, 2019.

  • McCoy, E., Greene, J., Henson, J., Pinder, J., Brown, J., Arthur, C., The chordinator: An interactive music learning device. In I. Barbancho, L. J. Tardon, A. Peinado, A. M. Barbancho (Eds.), Proceedings of the 16th Sound and Music Computing (SMC) Conference (pp. 297-298). Malaga, Spain, 2019.

  • Smith, J., Jacob, M., Freeman, J., Magerko, B., McKlin, T., Combining Collaborative and Content Filtering in a Recommendation System for a Web-Based DAW. In Proceedings of the 2019 Web Audio Conference (WAC 2019), Trondheim, Norway, 2019.

  • Bin, A., Bui, C., Genchel, B., Sali, K., Magerko, B., Freeman, J., From the museum to the browser: Translating a music-driven exhibit from physical space to a web app. In Proceedings of the 2019 Web Audio Conference (WAC 2019), Trondheim, Norway, 2019.

  • McKlin, T., Lee, T., Wanzer, D., Magerko, B., Edwards, D., Grossman, S., Bryans, E., Freeman, J., Accounting for Pedagogical Content Knowledge in a Theory of Change Analysis. In Proceedings of the 2019 ACM Conference on International Computing Education Research (ICER 2019), Toronto, Canada, 2019.

  • Smith, J., Weeks, D., Jacob, M., Freeman, J., Magerko, B., Towards a Hybrid Recommendation System for a Sound Library. In Joint Proceedings of the ACM IUI 2019 Workshops: Intelligent Music Interfaces for Listening and Creation (MILC), Los Angeles, California, 2019.

  • Savery, R., Genchel, B., Smith, J., Jones, M., Harriet Padberg: Computer-Composed Canon and Free-Fugue Renascence. Society for Music Theory, 2019.

  • Savery, R., Rose, R., Weinberg, G., Finding Shimi’s Voice: Fostering Human-Robot Communication With Music And a NVIDIA Jetson TX2. Linux Audio Conference, 2019.

  • Savery, R., Rose, R., Weinberg, G., Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture. 28th IEEE International Conference on Robot and Human Interactive Communication, 2019.

  • Savery, R., Ayyagari,  M., May, K.,  Walker, B., Soccer Sonification: Enhancing Viewer Experience. 25th International Conference on Auditory Display, 2019.

  • Savery, R., Genchel, B., Smith, J., Caulkins, A., Jones, M., Savery, A., Learning from History: Recreating and Repurposing Sister Harriet Padberg’s Computer Composed Canon and Free Fugue. New Interfaces for Musical Expression, 2019.

  • Leslie, G., Quon, R., Jobst, B.  Engineering Music to Mitigate Epilepsy. American Epilepsy Society Meeting, Baltimore, poster presentation, December 2019.

  • Winters, M., Walker, B., Leslie, G., Heartbeat entrainment: a physiological role for empathy in the act of music listening? Society for Music Perception and Cognition, New York, 2019.

  • Leslie, G., Ghandeharioun, A., Zhou, D., Picard, R., Engineering Music to Slow Breathing and Invite Relaxed Physiology. 8th International Conference on Affective Computing and Intelligent Interaction (ACII), Cambridge, United Kingdom, 2019.

  • Leslie, G., Vessels: Being as Material. In Boucher, M., et al. (Eds.) Being Material. Cambridge: MIT Press, 2019.

  • Huang, J., Lerch, A., Automatic Assessment of Sight-Reading Exercises. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Delft, 2019.

  • Lerch, A., Arthur, C., Pati, A., Gururani, S., Music Performance Analysis: A Survey. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Delft, 2019.

  • Pati, A., Lerch, A., Hadjeres, G., Learning to Traverse Latent Spaces for Musical Score Inpainting. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Delft, 2019.

  • Gururani, S., Sharma, M., Lerch, A., An Attention Mechanism for Music Instrument Recognition. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Delft, 2019.

  • Genchel, B., Pati, A., Lerch, A. Explicitly Conditioned Melody Generation: A Case Study with Interdependent RNNs. In Proceedings of the International Workshop on Musical Metacreation (MuMe), Charlotte, 2019.

  • Qin, Y., and Lerch, A. Tuning Frequency Dependency in Music Classification. In Proceedings of the International Conference on Acoustics Speech and Signal Processing (ICASSP), Brighton, 2019.

  • Gururani, S., Lerch, A., and Bretan, M. A Comparison of Music Input Domains for Self-Supervised Feature Learning. In Proceedings of the ICML Machine Learning for Music Discovery Workshop (ML4MD), Extended Abstract, Long Beach, 2019.

  • Pati, A., Lerch, A. Latent Space Regularization for Explicit Control of Musical Attributes. In Proceedings of the ICML Machine Learning for Music Discovery Workshop (ML4MD), Extended Abstract, Long Beach, 2019.

  • Guan, H.; Lerch, A. Learning Strategies for Voice Disorder Detection. In Proceedings of the International Conference on Semantic Computing (ICSC), Newport Beach, 2019.

  • Guan, H.; Lerch, A., Evaluation of Feature Learning Methods for Voice Disorder Detection, International Journal of Semantic Computing (IJSC), 13 (4), pp. 453–470, 2019, ISSN: 1793-351X.

  • Swaminathan, R. V. and Lerch, A. Improving Singing Voice Separation Using Attribute-Aware Deep Network. In Proceedings of the International Workshop on Multilayer Music Representation and Processing (MMRP). Milan, Italy, 2019.

  • Xambo, A., Lerch, A., and Freeman, J. Music Information Retrieval in Live Coding: A Theoretical Framework, In Computer Music Journal 42 (4), pp. 9–25, 2019.

  • Freeman, J., Magerko, B., Edwards, D., McKlin, T., Lee, T., and Moore, R. (2019). EarSketch: Engaging Broad Populations in Computing Through Music, in Communications of the Association for Computing Machinery (CACM). (Accepted pending minor revisions.)

  • McKlin, T., Wanzer, D., Lee, T., Magerko, B., Edwards, D., Grossman, S., and Freeman, J. (2019). Implementing EarSketch: Connecting Classroom Implementation to Student Outcomes, in Proceedings of the ACM SIGCSE Technical Symposium on Computer Science Education (SIGCSE 2019), Minneapolis, Minnesota (accepted and in press).

  • Wanzer, D., McKlin, T., Edwards, D., Freeman, J., and Magerko, B. (2019). Assessing the Attitudes Towards Computing Scale: A Survey Validation Study, in Proceedings of the ACM SIGCSE Technical Symposium on Computer Science Education (SIGCSE 2019), Minneapolis, Minnesota (accepted and in press).

 

  • Lerch, A., Music Information Retrieval; In Weinzierl, S. (ed.), Handbuch der Systematischen Musikwissenschaft (vol 5), Laaber, 2014

  • Bretan, M., Hoffman, G., Weinberg, G. (2014). “Emotionally Expressive Dynamic Physical Behaviors in Robots” In International Journal of Human-Computer Studies. (In Review)

  • Clark, J., Bretan, M., Weinberg, G. (2014). “Query By Dance,” Submitted to the Proceedings of the International Conference on Multimedia and Human-Robot Interaction (MHCI 2014), London, UK. (Submitted)

  • Bretan, M., Weinberg, G. “Chronicles of a Robotic Musical Companion,” submitted to Proceedings of the New Interfaces for Musical Expression Conference (NIME 2014), London, UK, 2014.

  • Freeman, J., Magerko, B., McKlin, T., Reilly, M., Permar, J., Summers, C., Fruchter, E. (2014) “Engaging Underrepresented Groups in High School Introductory Computing through Computational Remixing with EarSketch,” in Proceedings of the ACM SIGCSE Technical Symposium on Computer Science Education, Atlanta, Georgia.

  • Coler, H. v.; Lerch, A., CMMSD: A Data Set for Note-Level Segmentation of Monophonic Music, Proceedings of the AES 53rd International Conference on Semantic Audio, London, UK, January, 2014

  • Kraft, S.; Lerch, A., The Tonalness Spectrum: Feature-Based Estimation of Tonal Components, Proceedings of the 16th International Conference on Digital Audio Effects (DAFx), Maynooth, Ireland, September 2-5, 2013

  • Magerko, B., Freeman, J., McKlin, T., McCoid, S., Jenkins, T., and Livingston, E. (2013). “Tackling Engagement in Computing with Computational Music Remixing,” in Proceedings of the ACM SIGCSE Technical Symposium on Computer Science Education, Denver, Colorado.

  • Lee, S., and Freeman, J. "Echobo: A Mobile Music Instrument Designed for Audience to Play," in Proceedings of the New Interfaces for Musical Expression Conference (NIME 2013), Seoul, Korea.

  • Lee, S., and Freeman, J. (2013). "Real-time Music Notation in Mixed Laptop-Acoustic Ensembles," in Computer Music Journal (forthcoming).

  • Weitzner, N., Freeman, J., Chen, Y., and Garrett, S. (2013). “massMobile: Towards a Flexible Framework for Large-Scale Participatory Collaborations in Live Performances,” in Organised Sound, Cambridge University Press, 18:1.

  • McCoid, S., Freeman, J., Magerko, B., Michaud, C., Jenkins, T., Mcklin, T., and Kan, H. (2013). “EarSketch: An Integrated Approach to Teaching Introductory Computer Music,” in Organised Sound, Cambridge University Press, 18:2.

  • Cicconet, M., Bretan, M., and Weinberg, G. “Human-Robot Percussion Ensemble: Anticipation on the Basis of Visual Cues,” IEEE Robotics and Automation, Vol. 20:4. pp. 105-110, 2013.

  • Lee, S., Srinivasamurthy, A., Tronel, G., Shen, W. (2012). “Tok!: A Collaborative Acoustic Instrument using Mobile Phones,” in Proceedings of the New Interfaces for Musical Expression Conference (NIME 2012), Ann Arbor, Michigan.

  • Subramanian, S., Freeman, J., and McCoid, S. (2012). “LOLbot: Machine Musicianship in Laptop Ensembles,” in Proceedings of the New Interfaces for Musical Expression Conference (NIME 2012), Ann Arbor, Michigan.

  • Lerch, A., An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics; Wiley-IEEE Press, Hoboken, 2012, ISBN: 9781118266823

  • Bretan, M., Weinberg, G., and Freeman, J. "Sonification for the Art Installation Drawn Together," in Proceedings of the International Conference on Auditory Display, Atlanta, Georgia. (2012)

  • Freeman, J., DiSalvo, C., Nitsche, M., and Garrett, S. (2012). “Rediscovering the City with UrbanRemix,” in Leonardo, MIT Press, 45:5, pp. 478-479

  • Lee, S., Freeman, J., Colella, A., Yao, S., and Van Troyer, A. (2012). "Evaluating Collaborative Laptop Improvisation With LOLC," in Proceedings of the Symposium on Laptop Ensembles and Orchestras (SLEO 2012), Baton Rouge, Louisiana.

  • Sun, S., Mallikarjuna, T., Weinberg, G. “Effect of Visual Cues in Synchronization of Rhythmic Patterns,” International Conference on Music Perception and Cognition (ICMPC 2012), Thessaloniki, Greece, 2012.

  • Albin, A., Weinberg, G., Egerstedt, M. “Musical Abstractions in Distributed Multi-Robot Systems,” IROS, IEEE/Robotics Society of Japan International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 2012.

  • Weitzner, N., Freeman, J., Garrett, S., and Chen, Y. (2012). “massMobile – an Audience Participation Framework,” in Proceedings of the New Interfaces for Musical Expression Conference (NIME 2012), Ann Arbor, Michigan.

  • Bretan, M., Cicconet, M., Nikolaidis, R., and Weinberg, G. "Developing and Composing for a Robotic Musician Using Different Modes of Interaction" In Proceedings of the 2012 International Computer Music Conference (ICMC 12), Ljubljana, Slovenia. (2012)

  • Cicconet, M., Bretan, M., and Weinberg, G. "Visual cues-based anticipation for percussionist-robot interaction," In HRI 2012, 7th ACM/IEEE International Conference on Human-Robot Interaction, Boston, Massachusetts. (2012)

  • Freeman, J. (2012). “Georgia Tech Center for Music Technology,” in The Grove Dictionary of Musical Instruments, Oxford University Press.

  • Lee, S., Freeman, J., and Colella, A. (2012). “Real-Time Music Notation, Collaborative Improvisation, and Laptop Ensembles,” in Proceedings of the New Interfaces for Musical Expression Conference (NIME 2012), Ann Arbor, Michigan.

  • Nikolaidis, R., Weinberg G. Generative Musical Tension Modeling and its Application in Dynamic Sonification, Computer Music Journal, Vol. 36:1. pp. 55-64, 2012.

  • Ness, S. R.; Lerch, A.; Tzanetakis, G., Strategies for Orca Call Retrieval to Support Collaborative Annotation of a Large Archive; IEEE International Workshop on Multimedia Signal Processing (MMSP 2011), Hangzhou, China, October 17–19, 2011

  • Hoffman, G., Weinberg G. “Interactive Improvisation with a Robotic Marimba Player,” journal Autonomous Robots, Vol. 31, pp. 133-15, Springer Press. (2011)

  • Lerch, A., Software-gestuetzte Merkmalsextraktion für die musikalische Auffuehrungsanalyse; In: Loesch, H. von; Weinzierl, S. (eds.): Gemessene Interpretation - Computergestuetzte Auffuehrungsanalyse im Kreuzverhoer der Disziplinen, Schott Mainz, 2011, pp. 205-212, ISBN: 978-3795707712

  • Weinberg G. (2011), “Gesture-based Human-Robot Jazz Improvisation” Extended Abstract in the Proceedings of the International Conference of Machine Learning (ICML 11), Seattle, USA.

  • Kirchhoff, H.; Lerch, A., Evaluation of Features for Audio-to-Audio Alignment; Journal of New Music Research, Vol.40 No.1, 2011, pp. 27-41, doi = 10.1080/09298215.2010.529917

  • Freeman, J., DiSalvo, C., Nitsche, M., and Garrett, S. (2011). “Soundscape Composition and Field Recording as a Platform for Collaborative Creativity” in Organised Sound, Cambridge University Press, 16:3.

  • S. Şentürk and P. Chordia. “Modeling Melodic Improvisation in Turkish Folk Music Using Variable-length Markov Models” in Proceedings of International Conference on Music Information Retrieval, pp. 269-274, 2011.

  • Freeman, J. and Van Troyer, A. (2011). “Collaborative Textual Improvisation in a Laptop Ensemble” in Computer Music Journal, MIT Press, 35:2, pp. 8-21.

  • Albin, A., Senturk, S., Van Troyer, A., Blosser, B., Jan, O., Weinberg, G. “Beatscape, a mixed virtual-physical environment for musical ensembles,” International Conference on New Interfaces for Musical Expression (NIME 2011), Oslo, Norway, May 30-June 1, 2011.

 

  • Hoffman, G. and Weinberg, G. “Gesture-based Human-Robot Jazz Improvisation,” in Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 10), Anchorage, AK. (2010)

  • Huang, K., Starner, T., Do, E., Weinberg, G., Kohlsdorf, D, Ahlrichs, C. and Leibrandt, R. “Mobile Music Touch: Mobile Tactile Stimulation For Passive Learning” in Proceedings of International ACM Computer Human Interaction Conference (CHI 10), Atlanta, GA.

  • Alex Rae and Parag Chordia. “Tabla Gyan: An Artificial Tabla Improviser” In Proc. of the First International Conference on Computational Creativity (icccx), 2010.

  • Aida Austin, Elliot Moore, Parag Chordia and Udit Gupta. “Characterization of Movie Genre Based on Music Score” In Proc. of the 35th IEEE Conference of Acoustics, Speech, and Signal Processing, 2010.

  • Assaf Talmudi, Aaron Albin and Parag Chordia. “Can a robot get smarter by listening to itself? Musical memory as an extended auditory-neural-motor loop” In IROS workshop on Robots and Musical Expressions, 2010.

  • Parag Chordia, Avinash Sastry, Trishul Mallikarjuna and Aaron Albin. “Multiple viewpoints modeling of tabla sequences” In Proceedings of International Conference on Music Information Retrieval, 2010.

  • Parag Chordia, Avinash Sastry and Aaron Albin. “Evaluating multiple viewpoint models of tabla” In ACM Multimedia workshop of Music and Machine Learning, 2010.

  • J. Freeman and A. Colella. “Tools for Real-time Notation” Contemporary Music Review, 2010.

  • J. Freeman. “Web-based Collaboration, Live Musical Performance, and Open-form Scores” International Journal of Performance Arts and Digital Media, Vol. 6, No. 2, 2010.

  • Hoffman, G., and Weinberg G. “Synchronization in Human-Robot Musicianship,” The 19th International Symposium on Robot and Human Interactive Communication (RO-MAN 10), Viareggio, Italy. (2010)

  • J. Freeman and M. Godfrey. “Creative Collaboration Between Audiences and Musicians in Flock” Digital Creativity, Vol. 20, No. 4, 2010.

  • Nikolaidis, R., and Weinberg G. “Playing with the Masters: A Model for Interaction Between Robots and Music Novices,” 19th International Symposium on Robot and Human Interactive Communication (RO-MAN 10), Viareggio, Italy. (2010).

  • Weinberg, G., Nikolaidis, R., and Mallikarjuna, T. “A Survey of Recent Interactive Compositions for Shimon – The Perceptual and Improvisational Robotic Marimba Player,” International Conference on Intelligent Robots and Systems (IROS 2010), Taipei, Taiwan. (2010).

  • J. Freeman. "Compose Your Own, Part 2" The New York Times Online, May 24, 2010.

  • J. Freeman. "Compose Your Own" The New York Times Online, April 23, 2010.

  • Wiesener, C.; Flohrer, T.; Lerch, A.; Weinzierl, S., Adaptive Noise Reduction for Real-Time Applications; Proc. of the 128th AES Convention (Preprint #8048), London, UK, May 22-25, 2010

  • J. Freeman. "DIY Scores" Symphony: The Magazine of the League of American Orchestras, September/October 2010.

  • Weinberg, G., Godfrey, M., Beck, A. (2010) “ZOOZbeat – Mobile Music Recreation” in Extended Abstracts, Proceedings of International ACM Computer Human Interaction Conference (CHI 10), Atlanta, GA.

  • Hoffman, G., Weinberg, G. “Shimon: An Interactive Improvisational Robotic Marimba Player,” International ACM Computer-Human Interaction Conference (CHI 10), Atlanta, GA. (2010).

  • Weinberg, G., Mallikarjuna, T., Ramen, A. “Interactive Jamming with Shimon: A Social Robotic Musician,” Proceedings of the ACM/IEEE International Conference on Human Robot Interaction, (HRI 2009) San Diego, CA, pp. 233-234. (2009)

  • Freeman, J. (2009, March 4). “Giving Your GWT Application a Voice” in Google Web Toolkit Blog (official Google developer blog). Available from http://googlewebtoolkit.blogspot.com/2009/03/giving-your-gwt-application-voice.html.

  • Parag Chordia, Jagadeeswaran Jayaprakash and Alex Rae. “Automatic Carnatic Raag Classification” Journal of the Sangeet Research Academy (Ninaad), 2009.

  • Parag Chordia and Alex Rae. “Using source separation to improve tempo detection” In Proceedings of International Conference on Music Information Retrieval, 2009.

  • Parag Chordia and Brian Blosser. “What Makes Ragas Sad?” Abstract in Proc. of the 2009 Society for Music Perception and Cognition (SMPC), 2009.

  • Weinberg, G., Blosser, B., Mallikarjuna, T., Ramen, A., “Human-Robot Interactive Music in the Context of a Live Jam Session,” in the Proceedings of International Conference on New Instruments for Music Expression (NIME 09), Pittsburgh, PA, pp. 70-73. (2009)

  • Weinberg, G., Beck, A., Godfrey M. “ZooZBeat: a Gesture-based Mobile Music Studio,” Proceedings of International Conference on New Instruments for Music Expression (NIME 09), Pittsburgh, PA. (2009)

  • Lerch, A., Software-Based Extraction of Objective Parameters from Music Performances; Grin Verlag, München, 2009, ISBN: 978-3640294961

  • Weinberg, G., Blosser B. “A Leader-Follower Turn-taking Model Incorporating Beat Detection in Musical Human-Robot Interaction” in the Proceedings of the ACM/IEEE International Conference on Human Robot Interaction, (HRI 2009) San Diego, CA. (2009)

  • Parag Chordia, Mark Godfrey, Alex Rae. “Extending Content-Based Recommendation: The Case of Indian Classical Music” In Proc. of the 8th International Conference on Music Information Retrieval (ISMIR).

  • Weinberg, G. (2008) “Bluetaps – Transforming Cell Phones into Expressive and Gestural Musical Instruments” the Proceedings of International Conference on Intelligent Technologies for interactive entertainment (INTERTAIN 08), Cancun, Mexico.

  • Freeman, J. (2008). “Thoughts Around Terry Riley’s Chanting the Light of Foresight” in Open Space, 10, p. 143-149.

  • Weinberg, G. (2008) “The Beatbug – Evolution of a Musical Controller”, Digital Creativity, Taylor and Francis Press.

  • Weinberg G., Godfrey M., Rea, A., Rhodes, J. (2008) “A Real-Time Genetic Algorithm In Human-Robot Musical Improvisation”, Lecture Notes in Computer Science, Springer Press.

  • Weinberg, G. (2008) “Extending the Musical Experience – From the Digital to the Physical and Back”, in Seifert W., Hyun Kim J. and Moore A. (Eds.) Paradoxes of Interactivity – Perspectives for Media Theory, Human-Computer Interaction, and Artistic Investigations. Bielefeld, Germany: Transcript Verlag Press.

  • Weinberg, G. “The Music Box”, in Turkle S. (Ed.) Objects in Mind: Falling for Science, Technology and Design. Cambridge MA: MIT Press.

  • Freeman, J. “Graph Theory: Linking Online Musical Exploration to Concert Hall Performance” Leonardo 41/1, 2008.

  • Freeman, J. “Glimmer: Creating New Connections” Transdisciplinary Digital Art: Sound, Vision and the New Screen. Communications in Computer and Communication Science. Springer, 2008.

  • J. Freeman. “Technology, Real-time Notation, and Audience Participation in Flock” Proceedings of the International Computer Music Conference (Belfast), 2008.

  • Freeman, J. “Extreme Sight-Reading, Mediated Expression, and Audience Participation: Real-Time Music Notation in Live Performance” Computer Music Journal Vol. 32, No. 3, 2008.

  • Freeman, J. “Collaborative Creation, Live Performance, and Flock” Leonardo Music Journal Vol. 18, 2008.

  • Parag Chordia, Alex Rae. “A large-scale survey of emotion in raag music” In Proceedings of International Conference of Music Perception and Cognition, 2008.

 
  • Freeman, J. “Graph Theory: Linking Online Musical Creativity to Concert Hall Performance” Proceedings of the 6th ACM Creativity and Cognition Conference (Washington, DC), 2007.

  • Chordia, P. "A System for the Analysis and Representation of Bandishes and Gats Using Humdrum Syntax" In Proc. of the 2007 Frontiers of Research in Speech and Music Conference (FRSM 2007).

  • Chordia, P., Rae, A. “Relating Judgments of Dissonance to Sensory Consonance in the Context of Indian Classical Music” Abstract In Proc. of the 2007 Society for Music Perception and Cognition (SMPC).

  • Chordia, P., Rae, A. “Understanding Emotion in Raag: An Empirical Survey of Listener Responses” In Proc. of the 2007 International Computer Music Conference (ICMC).

  • Chordia, P., Rae, A. “Automatic Raag Classification Using Pitch-class and Pitch-class Dyad Distributions” In Proc. of the 7th International Conference on Music Information Retrieval (ISMIR).

  • Chordia, P., Rae, A. “Modeling and Visualizing Tonality in N. Indian Classical Music” In Proc. of the 2007 Neural Information Processing Systems Foundation (NIPS). (submitted)

  • Weinberg G., Driscoll, S. (2007) “The Interactive Robotic Percussionist: New Developments In Form, Mechanics, Perception And Interaction Design”, Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2007), Arlington, VA. pp. 97-104.

  • Weinberg, G., Driscoll S. (2007) “Introducing Pitch, Melody and Harmony into Robotic Musicianship”, Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2007), New York City, NY, pp. 228-233.

  • Weinberg, G. “The Design of a Perceptual and Improvisational Robotic Marimba Player”, Proceedings of IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2007), Jeju, Korea, pp. 132-137.

  • Weinberg, G., Driscoll S. “The Robotic Percussionist – Bringing Interactive Computer Music into the Physical World”, in Sick, A. and Lishca, C. (Eds.) Machines as Agency – Artistic perspectives. Bielefeld, Germany: Transcript Verlag Press, pp. 66-82.

  • Weinberg, G., Godfrey, M., Rae, A., Rhodes, J. “A Real-Time Genetic Algorithm in Human-Robot Musical Improvisation”, Proceedings of International Computer Music Conference (ICMC 2007), Copenhagen, Denmark, pp. 192-195.

  • Weinberg, G. (2007) “Musical Interactions Between Humans and Machines” in Sarkar N. (Ed.) Human-Robot Interaction. Vienna Austria: Ars Press. pp. 423-444.

  • Freeman, J. “Composer, Performer, Listener” In Komponieren in der Gegenwart: Texte der 42. Internationalen Ferienkurse für Neue Musik 2004, ed. Jörn Peter Hiekel. Saarbrücken, Germany: Pfau Verlag, 2007.

  • Freeman, J., Cerar, M. “Graph Theory and the Virtual Composer Residency Project” Proceedings of the Spark Festival of Electronic Music and Art (Minneapolis), 2007.

  • Freeman, J. “Graph Theory: Interfacing Audiences Into the Composition Process” Proceedings of the New Interfaces for Musical Expression Conference (New York), 2007.

  • Chordia, P. "Automatic transcription of solo tabla music". Ph.D. dissertation, Stanford University.

  • Chordia, P. “Automatic transcription and representation of solo tabla music” Computing in Musicology. Vol. 13.

 
  • Chordia, P. “Automatic raag classification of pitch-tracked performances using pitch-class and pitch-class dyad distributions” In Proc. of the 2006 International Computer Music Conference (ICMC 2006), New Orleans, LA.

  • Weinberg G., Driscoll S. “Robot-Human Interaction with an Anthropomorphic Percussionist” Proceedings of International ACM Computer Human Interaction Conference (CHI 2006). Montréal, Canada, pp. 1229 – 1232

  • Weinberg G., Thatcher T. “Interactive Sonification of Neural Activity” Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2006), Paris, France

  • Weinberg G., Driscoll, S., Thatcher T. “Jam ’aa – A Percussion Ensemble for Human and Robotic Players” ACM International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2006), Boston, MA.

  • Weinberg G., Freeman, J., Chordia P., Clark F., Moore C., Driscoll S., and Thatcher T. “Georgia Tech Music Technology Group – Studio Report” Proceedings of International Computer Music Conference (ICMC 2006), New Orleans, LA

  • Weinberg G., Thatcher T. “Interactive Sonification: Aesthetics, Functionality and Performance” Leonardo Music Journal 16, MIT Press.

  • Weinberg G., Driscoll S. “Towards Robotic Musicianship” Computer Music Journal 30:4, MIT Press, pp. 28-45

  • Thatcher T., Jimison D., Goetzinger J. “Sequencer404: A Networked Telephonic Composer” Mobile Music Workshop 2006. Brighton, UK.

  • Thatcher T., Jimison D., Goetzinger J., Freeman J., Weinberg G. “Mobile Networked Music Demonstration: Sequencer404,” Proceedings of International Computer Music Conference (ICMC 2006), New Orleans, LA.

  • Freeman, J. “Glimmer: Creating New Connections” Digital Art Weeks (Zurich), 2006.

  • Freeman, J. “Fast Generation of Audio Signatures to Describe iTunes Libraries” Journal of New Music Research, Vol. 35, No. 1, 2006.

  • Chordia P. “Segmentation and recognition of tabla strokes” In Proc. of the 6th International Conference on Music Information Retrieval (ISMIR), pages 107-114.
