Siren is a hybrid system for algorithmic composition and live-coding performances. Its hierarchical structure allows small modifications to propagate and aggregate at lower levels, producing dramatic changes in the musical output. It uses the functional programming language TidalCycles as its core pattern creation environment. Building on Tidal, Siren augments the pattern creation process with various UI features: a multi-channel sequencer, local and global parameters, mathematical expressions, and pattern history. Its pattern roll component presents new opportunities for recording, refining, and reusing playback information.
Siren is an open source project with Can İnce as its initiator and principal developer and Mert Toka as a core contributor. It has been featured on the TOPLAP website (the "home of live coding"), as well as in the crowdfunded 2017 book PUSH TURN MOVE: Interface Design for Electronic Music. A screencast of Can İnce performing with Siren at Algorave's "5th birthday live stream" is available online, showing an earlier version of the software in action.
İnce, C., Toka, M., Baytaş, M.A. (2018). Siren: Interface for Pattern Languages. In Proceedings of the 2018 International Conference on New Interfaces for Musical Expression (NIME 2018). [forthcoming]
The concept of computational heirlooms relates to the contrasting notions of “permanence and disposability,” “the digital and the physical,” and “symbolism and function” in the context of interaction design.
Drawing from diverse knowledge streams, we articulated a novel design direction for enduring computational heirlooms based on the dyad of decentralized, trustless software and durable mobile hardware. To justify this concept, we reviewed prior research; attempted to redefine the notion of “material;” proposed blockchain-based software as a particular digital material to serve as a substrate for computational heirlooms; and argued that mobile artifacts, informed in their materials and form-giving practices by mechanical wristwatches, can serve as its physical embodiment and functional counterpart. This novel integration is meant to enable mobile and ubiquitous interactive systems for storing, experiencing, and exchanging value throughout multiple lifetimes; for showcasing the feats of computational sciences and crafts; and for enabling novel user experiences.
Baytaş, M.A., Coşkun, A., Yantaç, A.E., Fjeld, M. (2018) Towards Materials for Computational Heirlooms: Blockchains and Wristwatches. In Proceedings of the 2018 Conference on Designing Interactive Systems (DIS 2018). [forthcoming]
ViewFinder is a cross-platform mobile application made to support the installation and reconfiguration of marker-based motion capture systems with multiple cameras.
ViewFinder addresses a common issue in installing or reconfiguring motion capture systems: system components such as the cameras and the host computer can be physically separated and/or difficult to reach, requiring personnel to maneuver between them frequently and laboriously. ViewFinder allows setup technicians and end users to visualize the output of each camera in the system in a variety of ways in real time, on a smartphone or tablet, while also providing a means to adjust system parameters such as exposure or marker thresholds on the fly. The app was designed and evaluated through a process following user-centered design principles, and it effectively reduces the amount of work involved in installing and reconfiguring motion capture systems.
ViewFinder is based on previous work by the development team at Qualisys AB and on an interaction design master's thesis project by Emmanuel Batis and Mathias Bylund at Chalmers University of Technology.
Baytaş, M.A., Batis, E., Bylund, M., Çay, D., Yantaç, A.E., Fjeld, M. (2017). ViewFinder: Supporting the Installation and Reconfiguration of Multi-Camera Motion Capture Systems with a Mobile Application. In Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia (MUM '17).
LabDesignAR is an augmented reality application that supports the planning, setup, and reconfiguration of marker-based motion capture systems with multiple cameras. It runs on the Microsoft HoloLens and allows the user to place any number of virtual “holographic” motion capture cameras into the surrounding space, in situ. The holographic cameras can be positioned freely, and different lens configurations can be selected to visualize the resulting fields of view and their intersections. The features in LabDesignAR are mainly inspired by the Qualisys Lab Designer web application and adapted for augmented reality.
LabDesignAR also demonstrates a hybrid natural gestural interaction technique, implemented by fusing the vision-based hand tracking capabilities of an augmented reality headset with instrumented gesture recognition from an electromyography armband. The code for LabDesignAR and its supporting components is open source.
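The field-of-view visualization reduces to basic lens geometry: the horizontal and vertical viewing angles follow from the camera's sensor dimensions and the focal length of the selected lens. Below is a minimal sketch of that calculation in Python; the sensor and lens values are hypothetical placeholders rather than actual Qualisys camera specifications, and this is not the code used in LabDesignAR.

    import math

    def field_of_view_deg(sensor_size_mm: float, focal_length_mm: float) -> float:
        """Angular field of view for one sensor dimension, using the pinhole camera model."""
        return math.degrees(2 * math.atan(sensor_size_mm / (2 * focal_length_mm)))

    # Hypothetical sensor dimensions and selectable lenses, for illustration only.
    sensor_w_mm, sensor_h_mm = 11.3, 7.1
    for focal_length_mm in (6.0, 12.0, 24.0):
        h_fov = field_of_view_deg(sensor_w_mm, focal_length_mm)
        v_fov = field_of_view_deg(sensor_h_mm, focal_length_mm)
        print(f"{focal_length_mm:4.1f} mm lens -> {h_fov:5.1f} x {v_fov:5.1f} degrees")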
Baytaş, M.A., Yantaç, A.E., Fjeld, M. (2017). LabDesignAR: Configuring Multi-camera Motion Capture Systems in Augmented Reality. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17).
To introduce a wider audience of researchers to this field of inquiry, and to situate our work within existing research, we reviewed previous works that used motion capture to study the kinematics of sign and gesture production. We presented preliminary results from this review, along with comments on technical and methodological issues, as a poster at the DComm conference Language as a Form of Action.
Baytaş, M. A., Çay, D., Yantaç, A. E., & Fjeld, M. (2017). Motion Capture in Gesture and Sign Language Research. Poster presented at the DComm conference Language as a Form of Action.
This project was undertaken as a prelude to an interaction design project that aimed to develop user interfaces (hardware and software) for performing loop-based, live-sequenced electronic music (e.g. techno, house, etc.) with performer-device interactions that are emotive and legible to audiences. We were interested in how watching a live-sequenced electronic music performance, compared to merely hearing the music, contributes to spectators' experiences of tension. We explored this question via an experiment based on Vines, Krumhansl, Wanderley & Levitin's 2006 work on cross-modal interactions in the perception of musical performance. We also explored the role of the performer's “effective” and “ancillary” gestures in conveying tension when they can be seen.
We conducted an experiment in which 30 participants heard, saw, or both heard and saw a recording of a live-sequenced techno music performance while producing continuous judgments of their experience of tension. Eye tracking data was also recorded from participants who saw the visuals, to reveal aspects of the performance that influenced their tension judgments. We analyzed the data to explore how the auditory and visual components, as well as the performer's movements, contribute to spectators' experience of tension. Our results show that the perception of emotional intensity is consistent across hearing and sight, suggesting that gestures in “non-instrumental” live sequencing can be a medium for expressive performance.
Baytaş, M.A., Göksun, T., & Özcan, O. (2016). The Perception of Live-sequenced Electronic Music via Hearing and Sight. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2016).
Hotspotizer allows users without computer programming skills to design, visualize, save, and recall sets of custom full-body gestures for use with the Microsoft Kinect. These gestures are mapped to system-wide keyboard commands, which can be used to control any application.
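Conceptually, the end result of a Hotspotizer mapping is a lookup from a recognized gesture to a keyboard command that is injected system-wide. The sketch below illustrates only that idea; the gesture names, key bindings, and the use of the pyautogui library are illustrative assumptions and not part of Hotspotizer itself.

    import pyautogui  # keyboard automation library, used here for illustration

    # Hypothetical mapping from recognized gesture names to system-wide key presses.
    GESTURE_TO_KEY = {
        "swipe_left": "left",      # e.g. previous slide
        "swipe_right": "right",    # e.g. next slide
        "raise_both_hands": "f5",  # e.g. start a presentation
    }

    def on_gesture_recognized(gesture_name: str) -> None:
        """Dispatch a recognized gesture as a keystroke that the focused application receives."""
        key = GESTURE_TO_KEY.get(gesture_name)
        if key is not None:
            pyautogui.press(key)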
Hotspotizer was my interaction design master’s thesis project at the Koç University Design Lab. The software, as well as my thesis (in LaTeX), are open source.
Hotspotizer has been featured on Microsoft’s Channel 9 Coding4Fun Kinect Projects blog, and utilized in educational contexts.
Baytaş, M. A., Yemez, Y., & Özcan, O. (2014). Hotspotizer: end-user authoring of mid-air gestural interactions. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction (NordiCHI '14).
Baytaş, M.A. (2014). End-User Authoring of Mid-Air Gestural Interactions. MA thesis submitted to the Koç University Graduate School of Social Sciences and Humanities.
Baytaş, M. A., Yemez, Y., & Özcan, O. (2014). User Interface Paradigms for Visually Authoring Mid-Air Gestures: A Survey and a Provocation. In Proceedings of the Workshop on Engineering Gestures for Multimodal Interfaces.
Ünlüer, A. A., Baytaş, M. A., Buruk, O. T., Cemalcilar, Z., Yemez, Y., & Özcan, O. (2017). The Effectiveness of Mime‐Based Creative Drama Education for Exploring Gesture‐Based User Interfaces. International Journal of Art & Design Education.
This position paper proposes the re-reading of past artifacts and traditions as a possible way to inspire the design of future media on non-flat displays.
As an example, we illustrate how different narrative typologies found in ancient Greek vases (circular story reading, bottom-up time reading, abstract and realistic contrast reading, and reading in alignment) can yield inspiration for interactive content specific to spherical media. We conclude by pointing out design considerations regarding the composition of graphical elements on spherical surfaces.
Özcan, O., Ünlüer, A., Baytaş, M.A., & Serim, B. (2012). Rethinking Spherical Media Surfaces by Re-reading Ancient Greek Vases. Paper presented at the ITS '12 workshop Beyond Flat Displays: Towards Shaped and Deformable Interactive Surfaces.
For my bachelor's capstone project in mechanical engineering, I worked on a MEMS biosensor project at Koç University's Optical Microsystems Laboratory (OML). The multi-analyte MEMS biosensor uses an array of coated μ-cantilevers that shift their resonant frequencies upon analyte mass accretion, allowing analyte concentrations to be detected. The cantilevers are magnetically actuated, and their resonant frequencies are observed via interferometric optical readout. The remote, wireless chip is intended for use within a portable device. I designed and implemented a custom GUI and mechanism for setting up characterization experiments by directly manipulating the position of the chip relative to the laser beam; the system then traverses the μ-cantilever array and collects data without supervision.
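The sensing principle can be summarized with a harmonic-oscillator model of the cantilever: the resonant frequency is f = (1/2π)√(k/m), so a measured downward shift in frequency translates into the mass of accreted analyte. A minimal sketch of that back-calculation follows; the spring constant and frequency values are hypothetical and not the specifications of the OML chip.

    import math

    def added_mass_kg(spring_constant_n_per_m: float,
                      f_before_hz: float, f_after_hz: float) -> float:
        """Added mass on a cantilever, inferred from its resonant frequency shift.

        Models the cantilever as a simple harmonic oscillator, f = (1/(2*pi)) * sqrt(k/m),
        so m = k / (2*pi*f)**2 and the accreted mass is the before/after difference.
        """
        k = spring_constant_n_per_m
        m_before = k / (2 * math.pi * f_before_hz) ** 2
        m_after = k / (2 * math.pi * f_after_hz) ** 2
        return m_after - m_before

    # Hypothetical values: a 0.5 N/m cantilever whose resonance drops by 50 Hz.
    print(added_mass_kg(0.5, 30_000.0, 29_950.0))  # roughly tens of picograms, in kg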
As an undergraduate researcher at Koç University's Manufacturing & Automation Research Center (MARC), I worked on design, development, fabrication, and programming tasks for a custom experimental laser manufacturing workstation: a versatile machine with marking, cutting, engraving, and powder-sintering capabilities. The workstation used a 10.6 μm CO2 laser coupled to a 3-axis CNC positioning system, as well as a galvo-driven 1064 nm Nd:YAG laser.
The machine was controlled via a custom MATLAB UI and back end, together with an Arduino microcontroller. The software, which I partly developed, also supported toolpath and G-code generation from STL models. The microcontroller and peripheral electronics, which I partly designed and built, received user input from the software and precisely controlled the machine's industrial CO2 laser, AC servos, galvanometric scanner, and powder-sintering bed mechanism. The chassis was designed, mechanically analyzed, fabricated, and hand-assembled by a team of three.
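Toolpath generation for such a machine ultimately comes down to emitting G-code moves from slice geometry. The fragment below sketches that final step for a single 2D contour; the feed rate and the M3/M5 codes used to switch the laser on and off are assumptions (such codes are machine-specific), and this is not the MATLAB implementation used in the project.

    from typing import Iterable, List, Tuple

    def contour_to_gcode(points: Iterable[Tuple[float, float]],
                         feed_rate_mm_min: float = 600.0) -> List[str]:
        """Emit G-code for cutting one closed 2D contour with a laser.

        Assumes M3/M5 switch the laser on/off; these codes are machine-specific
        and purely illustrative.
        """
        pts = list(points)
        lines = ["G21", "G90"]                   # millimeters, absolute positioning
        x0, y0 = pts[0]
        lines.append(f"G0 X{x0:.3f} Y{y0:.3f}")  # rapid move to the start point, laser off
        lines.append("M3")                       # laser on (assumed code)
        for x, y in pts[1:] + [pts[0]]:          # feed moves around the contour, then close it
            lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_rate_mm_min:.0f}")
        lines.append("M5")                       # laser off (assumed code)
        return lines

    # Example: a 20 mm square.
    print("\n".join(contour_to_gcode([(0, 0), (20, 0), (20, 20), (0, 20)])))

Nothing in this fragment depends on the slicing step; any routine that turns an STL slice into an ordered list of 2D points could feed such a function.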