Marina Ananias

BA in Computer Science and Music, Concentration in Neuroscience

I design expressive, multisensory musical systems that investigate how people form creative agency when interaction is uncertain, exploratory, or still forming. My work combines instrument design, sensing systems, AI interaction, and large-scale platforms to study how interfaces can listen to tentative human input and respond to human needs in ways that expand creativity and support learning, wellbeing, and growth.
Movement I — Interfaces

Where expression begins

Piece 1 — MIDI Songwriter

How can tactile, non-symbolic interaction support intuitive understanding of harmony for novice musicians?

Concept: The MIDI Songwriter is a hardware instrument designed to make harmony intuitive. Rather than requiring theoretical knowledge, the instrument encodes harmonic relationships into its physical affordances, allowing non-expert musicians to create coherent musical phrases even when their gestures are still exploratory or imprecise. This project explores how physical affordances can preserve agency when users do not yet have explicit harmonic intent.

Method & Implementation: Designed and fabricated a custom enclosure and tactile interface; developed embedded firmware for real-time sensor processing and MIDI-over-USB communication, with visual feedback through RGB LEDs to guide user interaction.
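The core firmware idea can be illustrated with a short sketch. The real device runs as C++ on an Arduino, and its exact harmonic encoding is not described here, so the sensor range, scale, and voicing below are assumptions written in Python for readability:

```python
# Hypothetical sketch of the harmonic-quantization idea behind the
# MIDI Songwriter (the actual firmware is C++ on Arduino; the 10-bit
# sensor range, C-major scale, and triad voicing are assumptions).

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the C-major scale

def sensor_to_triad(raw: int, root_note: int = 60) -> list[int]:
    """Map a 10-bit analog reading (0-1023) to a diatonic triad.

    The player never names a chord: the sensor position is quantized
    to one of seven scale degrees, and a triad is stacked in thirds
    within the scale, so even an imprecise gesture lands on a
    harmonically coherent chord.
    """
    degree = min(raw * 7 // 1024, 6)  # quantize reading to scale degree 0-6
    notes = []
    for step in (0, 2, 4):            # root, third, fifth (in scale steps)
        d = degree + step
        notes.append(root_note + 12 * (d // 7) + C_MAJOR[d % 7])
    return notes
```

Quantizing before synthesis is what lets the hardware "absorb" imprecision: a reading of 0 yields a C-major triad, while mid-range readings land on neighboring diatonic chords.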

C++ Arduino Analog Sensors MIDI-over-USB Hardware & Firmware Design
MIDI Songwriter assembled device glowing blue on a desk
Final assembled hardware prototype
Laser-cut enclosure layout for the MIDI Songwriter
Laser-cut enclosure design
Internal wiring diagram of the MIDI Songwriter created in Fritzing
Wiring diagram connecting sensors, LEDs, and the microcontroller
Relevance
  • Opera of the Future: Hyperinstrument-inspired beginner-centered expressivity.
  • Responsive Environments: Physical sensing as a primary musical interface.

Piece 2 — Browser-Based Keyboard

Can minimal visual-motor interaction support early-stage musical exploration when users lack confidence, precision, or prior knowledge?
Browser-based keyboard interface mapping cursor movement to pitch and timbre
Browser-based interface used in classroom settings to teach basic harmony and melodic patterns

Concept: This browser-based keyboard maps cursor movement to pitch and harmonic structure, enabling users to explore melody and harmony using only a mouse or trackpad.

Method & Implementation: Implemented cursor-to-sound mappings in Scheme; designed interface constraints to reduce cognitive load for beginners.
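The mapping constraint described above can be sketched in a few lines. The original is implemented in Scheme (Scamper), so the window width, scale, and register below are illustrative assumptions, shown in Python:

```python
# Illustrative sketch of the cursor-to-pitch constraint (the original
# is Scheme/Scamper; window width, scale, and register are assumptions).

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def cursor_to_note(x: float, width: float = 800.0,
                   low_note: int = 48, octaves: int = 2) -> int:
    """Quantize horizontal cursor position to a MIDI note in C major.

    Snapping to scale degrees means imprecise pointing still produces
    an in-key pitch, lowering the cost of exploration for beginners.
    """
    degrees = octaves * 7                          # playable scale steps
    i = min(int(x / width * degrees), degrees - 1) # clamp to valid range
    return low_note + 12 * (i // 7) + MAJOR[i % 7]
```

The design choice is the same as in the hardware instrument: the interface constrains output to a coherent pitch set, so early-stage users explore melody without needing precision or theory.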

Scheme (Scamper) HTML5 JavaScript Sound mapping
Relevance
  • Opera of the Future: Hyperscore-inspired democratization of composition.
  • Multisensory Intelligence: Early example of multimodal translation (visual → auditory) for creative expression.

If interfaces give form to expression, systems give it scale, shaping how communities learn, connect, and create meaning together.

Movement II — Systems

Where technology becomes infrastructure

Piece 3 — BRASA Digital Ecosystem

How can digital platforms support belonging, opportunity, and collective imagination in large, distributed communities?

Concept: As Director of Technology and later COO, I led the redesign of BRASA's digital ecosystem, transforming fragmented tools into a unified system supporting conferences, mentorship, scholarships, and community engagement for over 15,000 students. Beyond scale, this work examined how digital systems can support participation when users are navigating uncertainty, new institutions, unfamiliar norms, and unequal access to information. The same design question recurs here: how systems can invite agency without assuming confidence or prior expertise.

Method & Implementation: Designed and deployed full-stack website, portal, app, and database systems; led cross-functional Agile teams and defined scalable technical processes; measured engagement through traffic analytics and NPS.
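Of the metrics above, NPS has a simple, standard definition worth making concrete. The survey scale (0-10) is the conventional one; the data shape here is an assumption:

```python
# Sketch of the standard Net Promoter Score computation used to track
# community engagement (the 0-10 scale is conventional; the input
# format is an assumption, not BRASA's actual pipeline).

def net_promoter_score(ratings: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6), in points."""
    if not ratings:
        raise ValueError("no ratings provided")
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)
```

Because passives (7-8) count in the denominator but not the numerator, the score ranges from -100 to +100 and rewards moving detractors up more than accumulating lukewarm responses.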

ReactJS & React Native Next.js Django REST Framework Python TypeScript Docker AWS PostgreSQL Product Management Software Development Life Cycle Cross-Functional Team Leadership
Mockups of the official BRASA mobile app on multiple phones
BRASA mobile app QR codes and mockups
BRASA website and portal screenshots
Relational database schema for the BRASA platform
UML database schema connecting everything in a single ecosystem
Relevance
  • Opera of the Future: Distributed sensing of community behavior.
  • Multisensory Intelligence: Real-world multimodal data at social scale.

If systems let communities speak, signals let individuals be heard, revealing patterns of mind, behavior, and semantics.

Movement III — Signals

Where hidden patterns become audible and intelligible

Piece 4 — Brainwaves to Sound

How do people interpret and attribute meaning to sound generated from ambiguous biosignals when no explicit semantic mapping is provided?
Exploratory demo mapping EEG features to sound, used to study how listeners attribute meaning under ambiguous input

Relevance
  • Opera of the Future: Cognitive hyperinstrument exploring expressive sound, learning, and agency.
  • Responsive Environments: Physiological sensing with expressive feedback.
  • Multisensory Intelligence: Cross-modal representation of internal states.

Concept: A sonification system designed as an exploratory probe, mapping EEG signal features to generative sound without disclosing the mapping logic. Rather than prioritizing physiological accuracy, the prototype was built to examine how listeners search for coherence, agency, and meaning when sound is driven by an invisible, non-volitional signal.
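One way such a feature-to-harmony mapping could look is sketched below. Since the prototype's actual mapping was deliberately undisclosed, the band choice, thresholds, and chord qualities here are assumptions for illustration only:

```python
# Illustrative EEG-to-harmony mapping; the prototype's real mapping was
# deliberately undisclosed, so the alpha/beta features, thresholds, and
# chord qualities below are assumptions, not the actual design.

def band_power_to_chord(alpha: float, beta: float, root: int = 57) -> list[int]:
    """Choose a chord quality from the relative alpha-band power.

    Higher relative alpha (often read as a proxy for relaxed states)
    selects a consonant major triad; lower ratios select tenser
    qualities, so the harmonic color tracks a hidden signal feature.
    """
    ratio = alpha / (alpha + beta)   # relative alpha power in (0, 1)
    if ratio > 0.6:
        intervals = (0, 4, 7)        # major triad
    elif ratio > 0.4:
        intervals = (0, 3, 7)        # minor triad
    else:
        intervals = (0, 3, 6)        # diminished triad
    return [root + i for i in intervals]
```

A mapping like this is intentionally legible to the designer but opaque to the listener, which is what makes it usable as a probe for meaning attribution.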

Research Probe: Demonstrated in a classroom setting using prerecorded EEG data, allowing multiple listeners to simultaneously observe, interpret, and discuss the output in real time.

Insight & Next Iteration: During discussion, listener questions focused on how the sound might be used compositionally, rather than on how one might influence or control it. This framing suggested that, in the absence of an interpretable feedback loop or locus of agency, listeners understood the output primarily as an aesthetic artifact rather than an expressive, meaning-bearing interface. As a next iteration, I aim to introduce live EEG input and a constrained feedback loop, allowing participants to explore intentional influence over sound and directly compare passive versus interactive meaning-making.

Python EEG Processing Timbre/Harmony Mapping

Signals shape meaning, but music gives that meaning form: a language that reveals how we think, imagine, and create.

Movement IV — Language

Where ideas become sound and structure

Piece 5 — The Universe as a Musical

How can musical form communicate complex, non-linguistic structures such as cosmological processes?
DAW session view with layered textures and automation lanes
DAW session view and audio excerpt of "The Universe as a Musical"
Relevance
  • Opera of the Future: Explores musical narrative, expressive structure, and imaginative cross-disciplinary thinking foundational for new operatic and interactive musical systems.

Concept: This composition interprets the collision of black holes and the gravitational waves it produces through musical form. The piece reflects my broader interest in how non-linguistic structures can be made perceptible and interpretable, a concern that later informs my work on interactive and adaptive musical systems.

Method & Implementation: Composed using a Digital Audio Workstation (DAW); employed techniques such as spectral morphing, microtonality, and evolving textures to reflect scientific concepts musically.
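Microtonality, mentioned above, reduces to a standard bit of arithmetic: pitches are specified in cents (hundredths of an equal-tempered semitone) above a reference frequency. The A4 = 440 Hz reference is an assumption, not necessarily this piece's tuning:

```python
# Numeric sketch of microtonal pitch specification: a cent offset is
# converted to frequency by f = ref * 2**(cents / 1200). The A4 = 440 Hz
# reference is assumed, not taken from the piece itself.

def cents_to_hz(cents: float, ref_hz: float = 440.0) -> float:
    """Convert a cent offset above a reference pitch to a frequency."""
    return ref_hz * 2.0 ** (cents / 1200.0)
```

Because the scale is logarithmic, 1200 cents is exactly one octave (880 Hz from A4), and intervals finer than the 100-cent semitone fall out naturally, e.g. a quarter-tone is 50 cents.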

Digital Audio Workstation Extended Harmony Dynamic Timbre Design Narrative Composition

Supplementary note (optional): A written companion describing the scientific inspiration and compositional structure of this piece is available.

Future Directions

Where This Work is Heading

Building on these projects, my future research will focus on developing adaptive musical systems that respond to cognitive, emotional, and behavioral signals while preserving human agency under uncertainty. I am particularly interested in instruments that learn from users over time, co-evolving expressive possibilities rather than prescribing them, and in deploying these systems within real educational and community contexts.