
Loci: Language Learning Memory Palace for Meta Quest 3

Platform: Meta Quest 3, Interaction SDK, Voice SDK, Spatial Anchors

Role: XR UI Designer, Producer/PM, Sound Designer

Introduction

This project was built for the Meta 2024 Hackathon, hosted from April 1 to May 15, 2024.

Our group came together out of a shared passion for XR design and the potential of the medium. We knew we wanted to work within the “Hobbies & Skill Building” category, and ran some brainstorming exercises to get ourselves moving in a single direction.


The Figma map of User Interaction

Inspiration

Our inspiration came from the concept of a “Memory Palace,” or “Loci,” as described in Frances A. Yates’ book The Art of Memory (1966). She describes a technique in which a practitioner memorizes a subject by mentally mapping the information they want to retain onto the layout of a building or a row of shops. Each room is then further divided into individual pieces of information. To recall something, the practitioner “walks” through the loci to retrieve it.

We wanted to recreate this concept of a Memory Palace by helping the user create loci in mixed reality, with flashcards anchored in their own space. To simplify the process, we chose a subject (language) and narrowed it further (Spanish) for the purposes of this demo; the application and technique could conceivably be adapted to any subject.

What it does

After brainstorming, we settled on 3 key features we thought would make the exercise of learning a language more interesting and immersive:


  • Create: This is the default mode of the experience. The user is able to create a flashcard, using their voice to input the word (in English) they would like to have on the card. They can then place that card anywhere in the environment.


  • Randomize: Gives the user a pre-configured flashcard to place in the environment. These flashcards include native-speaker audio playback and the option to load a 3D model of the object, for users who want one or who don't have the physical item in their room. The 3D model and flashcard are linked and follow each other if either is moved.


  • Practice: This quiz feature is intended for use after the user has placed a few flashcards. All flashcard text in the environment is temporarily hidden, and the user is prompted with a Spanish word and points at the flashcard object they believe the word represents (sketched below).


All of the flashcards and their associated 3D models persist between sessions. The three modes listed above are launched from the wrist-button-activated menu panel. The app is hand-tracking only and requires microphone permission to use the flashcard creation feature.
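
To make the Practice flow concrete, here is a minimal Unity C# sketch of how a quiz round could work. The Flashcard stand-in, field names, and the way the pointing gesture reports a selection are illustrative assumptions rather than the project's actual code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of a Practice round: hide every label, prompt with the
// Spanish word of a random card, then check the card the user points at.
public class PracticeMode : MonoBehaviour
{
    [SerializeField] private List<Flashcard> placedCards;
    private Flashcard currentTarget;

    public void StartRound()
    {
        foreach (var card in placedCards)
            card.SetLabelVisible(false);                  // temporarily hide all flashcard text

        currentTarget = placedCards[Random.Range(0, placedCards.Count)];
        // In the app this prompt would appear on the quiz panel.
        Debug.Log($"Find the object for: {currentTarget.spanishWord}");
    }

    // Called when the index-finger pointing gesture selects a card at a distance.
    public bool SubmitGuess(Flashcard selected)
    {
        bool correct = selected == currentTarget;
        foreach (var card in placedCards)
            card.SetLabelVisible(true);                   // reveal the labels again
        return correct;
    }
}

// Minimal stand-in for a placed flashcard.
public class Flashcard : MonoBehaviour
{
    public string spanishWord;
    [SerializeField] private GameObject label;            // the card's text object

    public void SetLabelVisible(bool visible) => label.SetActive(visible);
}
```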

The game "Alba" served a great inspiration for a nature based game about helping animals.

Untitled Goose Game, on the opposite end of the spectrum, served as inspiration for simple, easy to understand puzzles, in a delightful art style.

jxGR4Ii.gif

And of course, Pokémon Go sets the standard for outdoor AR experiences. We were fortunate enough to be a preferred partner with Niantic on this project.

How we built it


To start things off, we did a lot of research and brainstorming in Mural to collect our inspirations and plan our ideas. We used Figma for much of the design planning, Blender for modeling, and Unity with Meta's Presence Platform SDKs for development.

Some of the frameworks, APIs, and SDKs we used for development, and how they were implemented, are as follows:

  • Spatial Anchors: The foundation of Loci. A powerful and reliable feature for mixed reality experiences, opening up a host of possibilities for app creation using Quest Passthrough.


  • Interaction SDK: Poke Interactions, Hand Grab Interactions, Synthetic Hands, and Custom hand pose detection. We wanted an index finger pointing gesture for distance selection, and created it using Meta's Custom hand pose detection capabilities, which were pretty clearly explained in the documentation.


  • MLCommons Multilingual Spoken Words Dataset (open): This open dataset contains thousands of audio clips of native pronunciations of words. For this project, we included a database of some 17,000 Spanish words and their English translations in text format, which the user can search from the flashcard. For the pronunciation audio, as this was a small-scale demo, we hand-picked 9 common household words (the words in Randomize mode) and included 4 native pronunciations for each, deliberately mixing female, male, young, and old voices as much as possible to enhance listening practice. We were inspired by the ability to listen to multiple native pronunciations at will, as understanding native speakers is something we have struggled with in our own language learning.


  • Wit.AI via Meta Voice SDK (Dictation): We leveraged the Voice SDK's Dictation feature to enable voice search during flashcard creation. It is activated by a button press on the flashcard, takes the first word recognized from the audio input, and looks it up in the MLCommons dataset to return the Spanish translation (see the sketch after this list). One challenge is that the dataset contains many duplicate words, so voice dictation alone isn't sufficient for perfect translation accuracy; extending this feature with the Voice SDK's more advanced AI-powered voice interpretation could take it to the next level.


  • Visual Design Tools: Blender, Figma, and DOTween (for animations), among others, were used to craft the visual language of Loci.
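
To illustrate the dictation-to-translation step described above, here is a minimal sketch that assumes the English-Spanish pairs are loaded from a simple comma-separated text file into a dictionary. The class and method names are hypothetical, and the transcript handler would in practice be wired to the Voice SDK's dictation result event.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of the lookup step used during flashcard creation.
public class FlashcardLookup : MonoBehaviour
{
    [SerializeField] private TextAsset wordList;   // assumed format: "english,spanish" per line
    private readonly Dictionary<string, string> englishToSpanish = new Dictionary<string, string>();

    private void Awake()
    {
        foreach (var line in wordList.text.Split('\n'))
        {
            var parts = line.Trim().Split(',');
            if (parts.Length == 2)
                englishToSpanish[parts[0].ToLowerInvariant()] = parts[1];
        }
    }

    // Would be called with the transcript returned by the Voice SDK's dictation
    // feature; takes the first recognized word and returns its translation.
    public string TranslateFirstWord(string transcript)
    {
        var firstWord = transcript.Trim().Split(' ')[0].ToLowerInvariant();
        return englishToSpanish.TryGetValue(firstWord, out var spanish) ? spanish : null;
    }
}
```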


Challenges

From a human point of view, a lot of our difficulties stemmed from trying something new as a group of disparate technologists. We had never worked together as a team before, so many of the initial challenges came from simply getting coordinated and communicating regularly. Towards the end of the project we were crunched for time, partly because we hadn't stuck to previously established deadlines and partly because we didn't know earlier what we would need later. Overall, we all would have liked more time to polish at the end.

The designers and developers were also working in separate environments, the designers in Figma and the developers in Unity, so we weren't always able to see what the other group was working on, or know when one group had completed something the other needed to see. Being more diligent about transparency and communication with one another is definitely something we would work on as a group in the future.

For development, one challenge involved allowing the user to move the flashcards while still supporting spatial anchors: once a spatial anchor is created, it can't be moved. Our developer wrote a script that deletes the card's anchor (and all references to it) when the card is grabbed, and creates and saves a new one when the card is re-positioned. We found that for our use this was performant enough and did not cause any frame drops.
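
Roughly, that approach looks like the sketch below, assuming the OVRSpatialAnchor component from the Meta XR Core SDK. The save/erase calls follow recent SDK versions and may differ in older releases, and the grab/release callbacks are assumptions (in the project they would be driven by Interaction SDK grab events), so treat this as an outline rather than the actual implementation.

```csharp
using System.Threading.Tasks;
using UnityEngine;

// Sketch: erase and destroy the anchor while the card is held, then create and
// save a fresh anchor at the new pose when it is released.
public class MovableAnchoredCard : MonoBehaviour
{
    private OVRSpatialAnchor anchor;

    public async void OnGrabbed()
    {
        if (anchor != null)
        {
            await anchor.EraseAnchorAsync();   // remove the stored anchor (exact call varies by SDK version)
            Destroy(anchor);                   // drop the runtime anchor so the card can move freely
            anchor = null;
        }
    }

    public async void OnReleased()
    {
        anchor = gameObject.AddComponent<OVRSpatialAnchor>();

        // Wait for the anchor to finish creating before persisting it.
        while (anchor != null && !anchor.Created)
            await Task.Yield();

        if (anchor != null)
            await anchor.SaveAnchorAsync();    // persist so the card survives between sessions
    }
}
```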


Another challenge involved creating a wrist-based UI while dealing with hand occlusion. The wrist menu button sits like a watch face on the hand-tracking wrist bone. We found that once the menu panel was expanded, it needed to be unparented from the tracked hand to avoid jittering while the opposite hand interacted with the panel. We settled on a solution where the panel floats freely once activated, gently following the user's gaze.
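
A simplified sketch of that free-floating behaviour: detach the panel from the wrist when it opens, then ease it toward a point in front of the head each frame rather than locking to it. The field names and tuning values are illustrative.

```csharp
using UnityEngine;

// Sketch of the gaze-following menu panel.
public class LazyFollowPanel : MonoBehaviour
{
    [SerializeField] private Transform head;           // centre-eye / main camera transform
    [SerializeField] private float distance = 0.5f;    // metres in front of the user
    [SerializeField] private float followSpeed = 2f;   // higher = snappier follow

    // Called when the wrist button is pressed.
    public void Open()
    {
        transform.SetParent(null, true);               // unparent from the tracked hand to avoid jitter
        gameObject.SetActive(true);
    }

    private void LateUpdate()
    {
        // Ease toward a point in front of the head rather than snapping to it.
        Vector3 target = head.position + head.forward * distance;
        transform.position = Vector3.Lerp(transform.position, target, followSpeed * Time.deltaTime);

        // Keep the panel oriented toward the user.
        Quaternion face = Quaternion.LookRotation(transform.position - head.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, face, followSpeed * Time.deltaTime);
    }
}
```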


What we learned & what's next

As a team, all of us were new to participating in a hackathon. We also had to learn how to work together across time zones (from California to New York). For some of our designers, it was an opportunity to explore tools such as Unity and Blender further. For our developer, it was a chance to sharpen skills related to developing in mixed reality with Presence Platform, and it unlocked ideas for future apps.

Inspired by the "bubble" motif of the art style, we developed an unobtrusive user onboarding / hint system we believe is perfect for mixed reality. A semi-transparent hint "bubble" very gently follows the user's head gaze. It generally stays in the user's field of view (assuming no rapid change in head position or rotation) but is designed not to linger in front of the user's face, sitting off to the side instead. We also let the user "pop" the bubble to get rid of it, as we want the user to maintain control over their environment.
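
A minimal sketch of the bubble behaviour, under the same assumptions as the panel sketch above: it drifts toward a point offset to the side of the user's view rather than dead centre, and exposes a Pop() method that can be hooked up to a poke interaction to dismiss it.

```csharp
using UnityEngine;

// Sketch of the onboarding hint bubble: gently follows the head gaze, sits off to
// the side of the field of view, and can be popped by the user.
public class HintBubble : MonoBehaviour
{
    [SerializeField] private Transform head;
    [SerializeField] private Vector3 viewOffset = new Vector3(0.25f, -0.1f, 0.6f); // right/down/forward, metres
    [SerializeField] private float followSpeed = 1.5f;
    [SerializeField] private ParticleSystem popEffect;  // optional "pop" visual

    private void LateUpdate()
    {
        // Drift toward a point beside the user's view so the bubble never lingers
        // directly in front of their face.
        Vector3 target = head.TransformPoint(viewOffset);
        transform.position = Vector3.Lerp(transform.position, target, followSpeed * Time.deltaTime);
        transform.LookAt(head);                          // keep the bubble oriented toward the user
    }

    // Wire this to a poke/select event so the user can dismiss the hint.
    public void Pop()
    {
        if (popEffect != null)
            Instantiate(popEffect, transform.position, Quaternion.identity);
        Destroy(gameObject);
    }
}
```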


This was our team's first hackathon, so ideally we'd like to enter more hackathons to sharpen our skills. As for Loci, it would be interesting to develop it into a larger project, covering more subjects and adding more features to help the user create more elaborate memory palaces.
