The Fiddler Innovation Fellowship is an annual student scholarship awarded by eDream to an undergraduate student who shows promise of significant, innovative achievement. The goal of the Fellowship is to inspire students to propose significant research projects that incorporate art and technology to address cultural or global challenges. With the support of eDream, the selected student will work with a faculty mentor or mentors of their choosing to develop and complete a substantial project. Funding for the Fellowship is based on student needs and the requirements of the proposed project.
Support provided by the Fiddler Innovation Endowment Fund.
Jewel Ifeguni – communications
Jewel Ifeguni is the founder and CEO of YouMatter Studios, a virtual reality (VR) media startup focused on diversity and inclusion in media. Driven by the lack of representation of marginalized groups in video games, Ifeguni developed her first venture, QUEEN, a video game to empower young black girls like her sister. She then launched YouMatter Studios to innovate “the way we understand representation.”
YouMatter Studios amplifies and empowers marginalized voices through digital storytelling and immersive workshops. Ifeguni is passionate about creating VR films, web series, and workshops that spark important dialogues and innovate representation. She is dedicated to creating content that will enlighten, empower, and inspire future generations. Currently, Ifeguni and her team are working on a three-part VR film that addresses relations between police and marginalized groups.
Colter Wehmeier – architecture
Currently: Researcher at the Cyprus Institute
Mentor: Donna Cox
Proposal Abstract: As our lives become increasingly mediated by the internet, digital experiences take on an alarming amount of power in shaping reality. Like earlier mass media technologies, the internet frames our perception of the environment, other people, and ourselves. While the web radically decentralizes authority and democratizes access to information, the interplay of human psychology, the architecture of the internet, and its contingency on the real world transforms the human condition in an unprecedented fashion. The essence of this change has less to do with the quantifiable aspects of computers than with the inherently complex and often unintuitive aspects of the people who use them.
RIVEEL3D is a digital archeology database embodied in a comprehensive 3D-scanned model of Nicosia, Cyprus. I am responsible for user interface development in Unity 3D, and have experimented with VR headsets and physically tracked controllers to realize a museum-quality educational experience. I’ve come to realize the limits of immersion and interaction in VR and have thus shifted my interest toward how we can use real physical space to map information in a human-intuitive way. Rather than build up an isolated virtual museum experience, I’m interested in letting smartphone users access data at real locations, based on the conceptual threads they follow.
In his time as an AVL SPIN Fellow, Colter worked on several major projects.
- Designed and implemented GUI system, VR interaction model, and data recording/management for RIVEEL3D application (C# – Unity 3D)
- Coordinated communications between NCSA and the Cyprus Institute
- Devised software pipeline for translating photogrammetric and lidar data between the archeology database and a real-time visualization engine (Unity 3D)
- Documented and instructed researchers about RIVEEL3D software
- Planned and directed student research office renovation project
Patrick D. Aleo – Ph.D. Student, Astronomy
Currently: 2nd year Ph.D. Student in Astronomy, University of Illinois at Urbana-Champaign
Mentor: Donna Cox
Proposal Abstract: For thousands of years, humanity has described the movement and wonder of the night sky through stories. These stories catalyzed research in both observational and theoretical domains of the astronomical sciences, and ushered in today’s golden age of astrophysical computation, simulation, and visualization. My project, Estra, enables scientists to become their own storytellers by automating, informing, and improving astrophysical data visualization using machine learning algorithms. This approach utilizes “physically interpretable” clusters—clusters identified in a particular phase space corresponding to physically meaningful structures within the simulation data—to inform the color mapping transfer function, in addition to building a simple yet powerful shading network to map opacity, brightness, falloff, and other attributes.
In the current landscape, astrophysical data visualization is confined to small teams like the Advanced Visualization Lab at NCSA that have the tools, knowledge, and ability to create cinematic, data-driven visualizations. The purpose of Estra is to greatly expand this reach and guide scientists in creating their own cinematic, data-driven visualizations for publication, museum exhibits and TV shows, public outreach, and more. By seamlessly blending the art of storytelling with cutting-edge AI technology, they can tailor their visualizations to emphasize a particular theme, structure, message, or story for their intended audience. Although Estra is designed for astrophysical data, a primary goal is to expand its capabilities to handle simulations from any field of study, such as agriculture, atmospheric sciences, and biochemical processes, allowing scientists of all domains to become their own storytellers.
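The abstract’s core idea (identify physically interpretable clusters in a phase space, then key the color and opacity transfer function off the cluster labels) can be sketched in a few lines. Everything below is a hypothetical illustration, not Estra’s actual implementation: the two-blob phase space stands in for simulation particles, and a plain k-means stands in for whatever clustering Estra uses.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for simulation particles: two blobs in a
# (log density, log temperature) phase space, e.g. dense/cool disk
# material vs. diffuse/hot ejecta.
disk = rng.normal(loc=[8.0, 3.0], scale=0.3, size=(500, 2))
ejecta = rng.normal(loc=[4.0, 6.0], scale=0.3, size=(500, 2))
X = np.vstack([disk, ejecta])

def kmeans(X, k, iters=20):
    # Minimal k-means; centers seeded deterministically from the data.
    centers = X[:: len(X) // k][:k].astype(float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X, k=2)

# Each "physically interpretable" cluster gets its own RGBA entry in the
# transfer function; opacity, brightness, and falloff can be keyed likewise.
palette = np.array([[1.0, 0.6, 0.1, 0.8],   # cluster 0: warm, fairly opaque
                    [0.2, 0.4, 1.0, 0.3]])  # cluster 1: cool, more transparent
particle_rgba = palette[labels]
```

The per-particle RGBA array is exactly what a volume or particle renderer consumes, so the clustering step directly drives the visual mapping.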
“Estra: Clustering Methods for Astrophysical Data Visualization in the Moon-forming Synestia Simulation,” Patrick D. Aleo et al., 2019 (in preparation)
Kyungho Lee – Ph.D. 2019, Illinois Informatics Institute
Mentors: Donna J Cox, Guy Garnett
The power of visualization arises from its capability to apply metaphors and semantics of graphical elements to help people perceive and understand new information in terms of their prior experiences and knowledge. Although metaphorical visualization methods have been increasingly researched, only a few empirical studies explore how people perceive and comprehend information represented by visual metaphors (visaphors) in interactive systems. The aim of my research is therefore to investigate how visaphors can convey the essence of information, and which aspects of visaphors enhance the user experience, using quantitative and qualitative methods.
Kyungho Lee explores the potential of machine learning techniques to design intelligent interactive systems with an emphasis on the use of expressivity in body movement. He is a Ph.D. student at the Illinois Informatics Institute at the University of Illinois at Urbana-Champaign. He is a Fiddler Innovation Endowment Research Fellow and former Fulbright Scholarship recipient.
Kyungho has been working with Dr. Donna Cox and Dr. Guy Garnett on the MovingStories project, an SSHRC-funded interdisciplinary, collaborative institutional research initiative for the design of digital tools for movement, meaning, and interaction. In the project, he used one of XSEDE’s supercomputing facilities, Stampede (TACC), to analyze expressive conducting gestures and better understand the expressivity and experiential qualities of human body movement. His research outcomes have been published and exhibited in venues such as IEEE VIS, ACM C&C, ICMC, ISEA, and ACM SIGGRAPH Asia.
Kyungho completed a dual degree program at Seoul National University in Korea, majoring in Interaction Design and in Information, Culture, and Technology Studies. Before starting his doctorate, he worked as an interaction designer in Korea for about six years, building web services, mobile apps, and home appliances for various clients.
Michael J Junokas – Ph.D., Arts and Cultural Informatics
Michael J Junokas has a Ph.D. in Informatics from the University of Illinois. His research focuses on developing innovative, multi-platform systems that can gather, interpret, process, and control signals in live artistic performance. Through the exploration of these systems, he creates non-linear methods of technological exploration that provoke artistic introspection and aesthetic reflection.
Mike is currently working with Dr. Robb Lindgren on ELASTIC3S, an NSF Cyberlearning research project conducted by an interdisciplinary team of learning scientists and computer scientists at the University of Illinois at Urbana-Champaign. The goal of this project is to explore ways that body movement can be used to enhance learning of “big ideas” in science. For the project, he has focused on developing gesture-recognition algorithms trained using one-shot learning. His research from this project has been published in ICLS, NARST, and JCAL, and has been used to inaugurate the Illinois Digital Ecologies and Learning Laboratory.
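A common baseline for one-shot gesture recognition is nearest-template matching: store a single recorded example per gesture and classify new input by its distance to each exemplar. The sketch below illustrates that idea with dynamic time warping on 1-D traces; the gesture names and signals are invented for illustration and are not taken from ELASTIC3S.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic dynamic-time-warping cost between two 1-D signals,
    # tolerant of differences in speed and length.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# One exemplar per gesture class ("one-shot"): hypothetical motion traces.
templates = {
    "raise": np.linspace(0.0, 1.0, 30),
    "wave":  np.sin(np.linspace(0.0, 4 * np.pi, 30)),
}

def classify(trace):
    # Assign the label of the nearest template under DTW.
    return min(templates, key=lambda name: dtw_distance(trace, templates[name]))

# A slower, noisier "raise" gesture should still match its exemplar.
noisy_raise = np.linspace(0.0, 1.0, 40) + 0.05 * np.sin(np.linspace(0, 10, 40))
```

Real systems would use multi-dimensional skeletal features rather than a scalar trace, but the one-exemplar-per-class structure is the same.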
Mike has also worked with Dr. Guy Garnett on the MovingStories project, an SSHRC-funded interdisciplinary, collaborative institutional research initiative for the design of digital tools for movement, meaning, and interaction. His research from this project has been published in ACM C&C, ICMC, and ISEA.
Mike’s artistic and musical work (https://vimeo.com/junokas) has been exhibited at a variety of venues including McGill’s Transplanted Roots: Percussion Research Symposium, Illinois Wesleyan’s New Music Series, Illinois State’s New Sound Series, the School of the Art Institute’s Sullivan Galleries, and Experimental Sound Studio’s Outer Ear Series.
Austin Lin – theater
Currently: Head of Technology at the White House
Project: Making Art Happen
Mentor: Donna Cox
Proposal Abstract: Today's technological world is perhaps best summed up by the now-famous phrase "there's an app for that." We now have so many applications (desktop, web, mobile, etc.) that end users are forced to string application after application together to accomplish anything. Consider photography: we have applications for importing photos from your camera, for sharing photos, for editing photos, for printing photos, for creating movies from photos, and so on. This situation is just as true in "advanced" areas of computing like HPC and visualization. The next step in computing is not creating more systems but linking the systems we already have to create more powerful, more accessible, and more intelligent systems. There is no magic bullet for this problem, and solving it is certainly beyond the scope of a one-semester project or any one organization, but we can begin to create standards that move us toward more connected systems.
To this end, I propose to research and create recommendations for a control standard for interactive systems. The control standard would define how control endpoints are advertised, how messages are passed, and how control devices or applications associate with the interactive system. In my research I would focus on building upon existing standards where possible and work closely with my mentors at NCSA as well as others in the campus community who work with interactive systems. One example of an existing standard I would build on is Open Sound Control (OSC), a message-passing standard widely used in the performing arts world. OSC has an easy-to-understand address syntax, a large base of existing applications, and a flexible specification, making it an ideal starting place. I have also chosen to begin with OSC because of my familiarity with it and the existing interest within NCSA's Advanced Visualization Lab (AVL).
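The OSC wire format the proposal builds on is simple: a null-terminated, 4-byte-padded address pattern, a type-tag string beginning with ",", then big-endian arguments. As a rough sketch (not part of any NCSA tool), a minimal encoder for int, float, and string arguments looks like this:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    # The type-tag string starts with ',' and has one tag per argument.
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("booleans not handled in this sketch")
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# Hypothetical address and arguments, e.g. a trigger flag and a parameter:
msg = osc_message("/galaxy/collide", 1, 0.5)
```

The resulting datagram can be sent over UDP to any OSC-aware application; libraries such as python-osc wrap exactly this encoding.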
The potential uses of such a control standard are far reaching, but a few that are specific to NCSA include: controlling AVL's vMaya from an iPad; feeding data from a performance venue to a simulation running on HPC resources at NCSA, which in turn creates the graphics used in the performance; and synchronized showings of an interactive simulation in which control signals are fed into the simulation from multiple geographically distant locations. These are somewhat dry technical examples, but the impact the technology offers is profound. It means that the next Stephen Hawking or Carl Sagan might be inspired by making galaxies collide using an iPad at the Adler Planetarium. It means that, using the same simulation, a dance choreographer might make galaxies collide using dancers' bodies and a Kinect, inspiring an entire audience. Suddenly a single simulation or application can do so much more and affect so many more people. This is what is possible when a control standard exists for interactive applications; it allows a community of people to build systems that can connect and, in doing so, become more than the sum of their parts. It democratizes the technology, making it available to those who don't have the resources of NCSA.