8/28 13:30-15:00

  1. Manzai Karaoke: A Real-time Support System Enabling Novices to Perform Manzai
    • Shunta Komatsu, Tomonori Kubota, Satoshi Sato, Kohei Ogawa
      • Abstract: Manzai, a two-person comedic performance, is usually spectator entertainment, yet performing it has emerged as an engaging activity that offers entertainment and enhances conversational skills. However, performing manzai is difficult, requiring script memorization and mastery of expressive elements such as intonation, movement, and timing. This research introduces "Manzai Karaoke," a visual guidance system that enables novice participants to perform manzai through real-time display of the script and nonverbal cues, in the manner of karaoke. To create an interface that satisfies the requirements of facilitating performance while maintaining audience engagement, we conducted two-step prototyping. The first step validated the feasibility of real-time comedy performance assistance and showed that guidance on intonation, emotion, actions, and timing is essential, not just script display. The second step showed that the improved interface met the requirements. This study advances entertainment computing by demonstrating visual guidance for comedy.
  2. Initial study of VR music appreciation by linking music and video
    • Sena Nemoto, Tetsuro Kitahara
      • Abstract: With the spread of music streaming services, abandoning songs partway through has become an issue. To encourage people to listen to unfamiliar songs to the end without getting bored, this study focuses on video and interaction. Specifically, we aim to improve the completion rate by using VR technology to allow listeners to actively engage with visual objects linked to the music while listening. In this paper, we report on a prototype VR system that we have been developing to achieve this.
  3. Towards Semi-Automated Drill Generation for Marching Bands
    • Taison Iino, Tetsuro Kitahara
      • Abstract: This research proposes a system for semi-automated generation of marching band drills. Creating drills is a time-consuming and labor-intensive process that requires expert knowledge and experience, yet no existing studies support the process of creating drills for marching bands. In this paper, we describe our attempt to create a system that semi-automatically generates drills from user-defined formations.
  4. TaPrompt: A Tap-based System for Structured Prompt Modification
    • Kazuho Hayashi, Homei Miyashita
      • Abstract: High-quality prompts are essential for effectively utilizing generative AI. Automated prompt generation techniques, however, frequently fail to reflect the user's detailed intent, leading to repeated textual modifications. In this paper, we introduce an interface for efficient prompt modification. It provides users with selectable options covering the task objective, role assignment, contextual information, output format specification, and reasoning steps. We expect that through tap-based interactions alone, users can swiftly obtain the desired generated outcomes.
  5. A Proposal of an Information Delivery Method using Human Movement as a Communication Medium for Electronic Paper Signage
    • Takafumi Akiba, Tsubasa Yumura
      • Abstract: Remote updating of electronic paper signage (EPS) is difficult in environments with unstable communication infrastructure. In this research, human movement is used as a communication medium to realize content distribution between EPS terminals. Specifically, a mobile terminal such as a smartphone acquires the latest data from an EPS and temporarily stores it. Then, when it approaches another EPS, it transfers content via Bluetooth Low Energy (BLE) communication, enabling content distribution through opportunistic connections. This paper presents a prototype implementation and verifies content delivery and screen updates between EPS terminals via mobile devices. The exchanged data is in JSON format, and the latest content is determined based on version comparison (a minimal sketch of this exchange appears after this session's list). This approach aims to ensure that EPS can function as a stable means of providing information even in environments where communication is restricted, such as during disasters.
  6. DeepBreathVR: A Proposal for Deep Breathing Interaction Focusing on Abdominal Movement
    • Hiroo Yamamura, Tomoya Sasaki, Atsuko Miyazaki, Atsushi Hiyama
      • Abstract: VR content that integrates breathing and entertainment elements has gained attention as a way to make breathing training more enjoyable. However, conventional breath-interactive content typically requires dedicated sensors to detect the user's respiration, which increases both the implementation cost and the physical burden on users. In this study, we propose a breath-guided VR system that detects breathing without additional sensors by placing a VR controller on the abdomen. The system provides visual feedback that changes according to the depth of the user's breath, utilizing the abdominal movement associated with respiration.
  7. Thinking by Hand: Storybook Creation from Brick Building with AI Support
    • Mondheera Pituxcoosuvarn, Yohei Murakami
      • Abstract: In this paper, we present an extended version of BricksTory that supports creative exploration through hands-on building and AI-generated storytelling. By introducing storybook compilation and embracing ambiguity in narrative feedback, the system encourages children to build even without a fixed idea and to reinterpret their creations in surprising ways. Our informal observations suggest that misrecognition by the AI can act as a spark for creativity rather than a barrier. BricksTory demonstrates how co-creation between children and AI can foster reflection, imagination, and personal narrative development, turning free play into an emergent storymaking experience.
  8. Development of Sound Field Switching System Linked to Multi-Camera Switching
    • Yuri Fujimura, Akinori Ito
      • Abstract: This paper reports on an attempt to implement a system that enables spatial sound design by synchronizing interactive switching between multiple cameras with diverse impulse response-based reverberation effects. The main components of the system are TouchDesigner and Reaper. In the proof-of-concept experiment, the researcher successfully switched between multiple IR data sets that they had recorded themselves. This approach holds promise for future applications, such as freely adding high-quality reverberation recorded in real-world spaces to virtual environments like the metaverse.
  9. Basic Study on Spaciousness Estimation due to Changes in Reverberation Level from Top Layer in Immersive Audio
    • Shota Konno, Akinori Ito
      • Abstract: In this study, we investigated whether changing the ratio of the sound pressure levels of delay and reverb applied to sounds reproduced by the upper speakers in a three-dimensional sound field on a DAW affects perceived spaciousness. We conducted an impression evaluation experiment using male voices lasting one second or longer and footstep sounds lasting less than one second as experimental stimuli. The results suggest that the delay effect is significant for short, impulsive sounds such as footsteps, while the reverb effect is more pronounced for sustained sounds such as male voices.
  10. Differences between Speaker and Headphone Listening in an Action Game Using Immersive Audio
    • Toshinao Saikawa, Akinori Ito
      • Abstract: In this study, we conducted an experiment to investigate the effects of differences in playback environments between 7.1.4-channel speakers and headphones on the perception of moving sound images in action games. The experimental stimuli were scenes created in Unity using the sound of a falling bomb. The sound image was moved downward from the height channel. Subjective evaluations and game scores were collected. Although no statistically significant differences were found, a slight tendency toward greater perceived height was observed in the 7.1.4ch environment.
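
  A minimal sketch of the version-based content exchange described in item 5 of this session (electronic paper signage updated via human movement), written in Python under assumptions: the JSON field names and the BLE receive hook are illustrative, not the prototype's actual schema or API.

    import json

    # Hypothetical JSON payload carried by a mobile terminal between EPS units.
    # Field names ("version", "body") are assumptions for illustration.
    stored_content = {"version": 3, "body": "Evacuation route map (rev. 3)"}

    def is_newer(candidate: dict, current: dict) -> bool:
        """Decide whether candidate content should replace the current content,
        using the version comparison described in the abstract."""
        return candidate.get("version", -1) > current.get("version", -1)

    def on_ble_payload_received(payload: bytes, current: dict) -> dict:
        """Hypothetical hook called when content arrives over BLE from a nearby
        mobile terminal; returns the content the signage should display."""
        candidate = json.loads(payload.decode("utf-8"))
        return candidate if is_newer(candidate, current) else current

    # Example: an older revision arriving over BLE is ignored.
    incoming = json.dumps({"version": 2, "body": "Evacuation route map (rev. 2)"}).encode()
    print(on_ble_payload_received(incoming, stored_content)["version"])  # -> 3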

8/29 13:30-15:00

  1. The Transition of the Relationship between the Design Concept of Home Video Game Console and Game Software: A Case Study of Sega
    • Kenji Ono, Naohiko Yamaguchi
      • Abstract: There is a relationship between the design concept of a content playback device and the content it plays; home video game consoles and game software are a prime example. In this study, we conducted a literature review and interviews with those involved in development regarding the design concept and software development system of Sega's home video game consoles. As a result, it became clear that the design concept of Sega's game machines gradually shifted from a dedicated game machine to a general-purpose machine. We also found that the technological assets accumulated in this process became the foundation for the development of game software after the company withdrew from the hardware business.
  2. Improvement and Usability Evaluation of a Robot System for Gaze Visualization and Visual Exploration Support for People with Physical Disabilities
    • Chihiro Sakai, Ory Yoshifuji, Yutaro Hirao, Monica Perusquía-Hernández, Hideaki Uchiyama, Kiyoshi Kiyokawa
      • Abstract: Approximately half of disability certificate holders in Japan have physical impairments, which may cause restricted vision and speech difficulties due to nerve damage or muscle weakness. These limitations hinder environmental awareness and communication, making pointing-based interaction with caregivers difficult. To address this, the author developed a robotic system in 2024 that enables gaze visualization and visual exploration. Users can look around freely and share their gaze point as a pseudo “pointing” gesture to indicate an object of interest. This study reports improvements to clarify the target by replacing the gaze module with a laser pointer, enhancing the robot’s exterior for better durability, and evaluating usability through an exhibition. When the improved robot was shown at the “World ALS Day in Nagoya,” feedback from people with physical impairments, caregivers, and attendees suggested its effectiveness and offered ideas for further refinement.
  3. A Study on Imperceptible Control Assistance for Narrow Path Navigation Tasks Using Joystick Input
    • Taichiro Yui, Kentaro Fukuchi
      • Abstract: Computer games sometimes introduce control assistance for beginners, but obvious assistance can negatively impact the playing experience. In this study, we developed a method that provides fall prevention assistance only when the stick input direction changes, as an imperceptible assist for players in a task involving navigating across a narrow bridge (a minimal sketch of this trigger appears after this session's list). Results from user experiments suggest that the method can contribute to performance improvement without players noticing the assistance.
  4. A Study on Background Music Recommendation for a Manga Viewer
    • Ryoga Takarada, Ryusei Hayashi, Tetsuro Kitahara
      • Abstract: This study investigates a background music recommendation system designed for a manga viewer. The integration of visual content and music has been demonstrated to enhance emotional guidance and reader immersion in anime and games. However, few studies have explored the addition of background music to manga. In this paper, we propose a system that extracts scene tags from unannotated manga images and assigns appropriate background music using freely available, commercially usable music resources, without generating new music.
  5. A Support System for Music Score Reading for Novice Piano Players
    • Risako Shono, Junko Fujii, Tetsuro Kitahara
      • Abstract: This study aims to develop a support system that helps novice piano players learn to read music scores. For beginners, reading sheet music can be challenging. To enable such players to play the piano, systems like Synthesia have been developed, which guide melodies using game-like visualizations. However, these systems do not help learners acquire music reading skills. In this paper, we propose a system that facilitates music score reading by gradually transitioning the visual guidance from game-like representations to traditional score-based notation.
  6. A Study on Agents for Recommending Products Across the Entire Shelf
    • Gen Ono, Takuya Iwamoto, Ryo Miyoshi, Yuki Okafuji, Soh Masuko
      • Abstract: While the "golden zone" on retail store shelves attracts high visibility, products placed outside this area often receive less attention, limiting the effectiveness of advertisements. In this study, we developed a recommendation system that displays an agent on a shelf-edge digital signage to point at arbitrary product locations. A laboratory experiment demonstrated that the system can guide viewers’ gaze regardless of product placement, suggesting its potential to draw attention even to products in low-visibility areas.
  7. Evaluation of Interaction with Nature Using Mobile Applications in Green Space in Commercial Facility
    • Yutaro Miki, Ruka Uesugi, Koto Cho, Sho Mitarai, Nagisa Munekata, Takaaki Nishida
      • Abstract: The realization of a society in symbiosis with the natural environment requires citizens to engage with nature. While it has become easier to record one's behavior using mobile applications, there has been little research on promoting such behavior. Therefore, this study developed a mobile app called “Morisodate” with the aim of promoting app users' engagement with nature through gamification-based intervention. In this field trial, we introduced a camera system and conducted surveys to establish a mechanism for evaluating app users' behaviors from both quantitative and qualitative perspectives. The results of the pilot study showed that 25 groups used the mobile app, with a total of 128 actions recorded. The camera system also demonstrated a certain level of accuracy in tracking app users' behaviors.
  8. Game Controller Button Extension Enabling Ballistic Motion Input and Its Deployment in Society
    • Kazutaka Kurihara, Ayaka Maruyama
      • Abstract: Many fighting game players, even when playing on home consoles, use arcade-style controllers to achieve better functionality and ease of operation. This research proposes a method for extending game controller buttons to enable rapid pressing through ballistic motion, which optimizes performance according to Fitts’s law, and reports on its real-world deployment.
  9. JumpLab 2D: A Web-based Educational Tool for Teaching 3C through Parameter Adjustment in 2D Jump Games
    • Keita Yamazaki, Kentaro Fukuchi
      • Abstract: We developed JumpLab 2D, a web-based educational tool for beginners in game development to learn parameter adjustment related to video game 3Cs (Character, Camera, Control). Building on the original JumpLab, this tool enables learners to manipulate character behavior in the browser while adjusting parameters. By integrating explanations and exercises related to the parameters on the same page, it addresses self-directed learning difficulties and instructional clarity issues identified in the previous version.
  10. Soccer Juggling MR Training System with Haptic Feedback
    • Rento Onzoro, Masataka Imura
      • Abstract: Soccer juggling is a fundamental skill for ball control. However, beginners often struggle to acquire the correct feel, which requires extensive practice and can lead to demotivation. To address this problem, we propose a training system that improves learning efficiency by combining Mixed Reality (MR) with multimodal feedback. In an MR environment, the system tracks the user's foot in real time as they interact with a virtual ball. The proposed system has the following features: (1) haptic feedback via vibration to indicate the ball's precise contact point on the foot; (2) a visual replay function to analyze kicking form from multiple angles; and (3) adaptive difficulty control that adjusts the ball's speed to the user's proficiency level. This integrated approach creates an intuitive and effective training environment that facilitates skill acquisition.
  11. Development of Effect Voice Generator Specialized in Expressing Emotions during Responses
    • Kiyotake Ishikawa, Akinori Ito, Koji Mikami
      • Abstract: In this study, we propose a system for procedurally generating intermediate sounds between sound effects and voices for use in anime and game production. The design of the sound generation parameters was based on an analysis of existing works using transcription, combined with the results of an impression evaluation experiment on the emotions and intentions of characters as perceived by viewers. Considering integration with game engines, the system was implemented using Unity and Csound Unity.
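
  A minimal sketch of the direction-change-triggered assistance described in item 3 of this session (imperceptible control assistance for narrow path navigation), written in Python under assumptions: the angle threshold, the correction gain, and the way the input is nudged toward the bridge's center line are illustrative, not the values or formulation used in the presented study.

    import math

    DIRECTION_CHANGE_THRESHOLD = math.radians(15)  # assumed threshold, not from the study
    ASSIST_GAIN = 0.2                              # assumed correction strength

    def _angle(v):
        return math.atan2(v[1], v[0])

    def apply_assist(stick, prev_stick, lateral_offset):
        """Return a possibly corrected (x, y) stick input.
        stick, prev_stick: joystick vectors for the current and previous frame.
        lateral_offset: signed distance of the character from the bridge center.
        The input is nudged back toward the center only on frames where the
        input direction changes, so the assistance stays hard to notice."""
        if math.hypot(*stick) < 1e-3 or math.hypot(*prev_stick) < 1e-3:
            return stick  # ignore near-neutral input
        change = abs(_angle(stick) - _angle(prev_stick))
        change = min(change, 2 * math.pi - change)  # wrap to [0, pi]
        if change < DIRECTION_CHANGE_THRESHOLD:
            return stick  # no direction change: pass the input through unmodified
        return (stick[0] - ASSIST_GAIN * lateral_offset, stick[1])

    # Example: a direction change while drifting right nudges the input slightly left.
    print(apply_assist((0.3, 0.9), (-0.4, 0.9), lateral_offset=0.5))  # roughly (0.2, 0.9)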