HXRC Team: Inside VCC and Yle’s Immersive Spatial Journalism Project ‘Shadows of the Stabilisation Point’

Oct 23, 2025

What happens when elements of photojournalism are combined with augmented reality? The result is an immersive documentary experience that not only helps us understand, but also allows us to feel the human impact of an ongoing war.

In this blog post, our HXRC team members Jessica Kivi and Irina Konovalova share their experiences creating an AR web application that places the user in the middle of a real Ukrainian frontline clinic, telling the stories of patients and the medical teams working tirelessly under extreme conditions. The project was carried out in collaboration with Yle Ulkolinja and Helsinki XR Center’s Virtual Creation Core initiative.

Text by Jessica Kivi and Irina Konovalova

‘Shadows of the Stabilisation Point’ was a one-of-a-kind collaborative project between the Finnish national broadcasting company Yle and Helsinki XR Center’s Virtual Creation Core initiative. This innovative spatial journalism implementation blended augmented reality elements with traditional photojournalism to create an immersive storytelling experience. The project aimed to bring the harsh realities of Ukraine’s ongoing war closer to the audience, resulting in an AR web application that places users in the middle of the moving stories of the patients and healthcare workers within a real-life Ukrainian war clinic. The final documentary experience was launched to the public in March 2025. It is available in both Finnish and English on Yle’s website and can be accessed on mobile devices.

You can visit the site here. Please note that the experience contains material that may be disturbing. 

Trainee introduction: Jessica Kivi

Hi! I’m Jessica Kivi, a 3D Artist intern at Helsinki XR Center and a fresh graduate from Metropolia’s 3D Animation and Visualization studies. In this project I was responsible for all of the visual design and the cleanup of the 3D models. I worked alongside Irina Konovalova, who will later tell more about the coding side of this project.

Trainee introduction: Irina Konovalova

Hi, I’m Irina, a Developer intern here at HXRC and an Information and Communication Technology student at Metropolia UAS. In this project, my part focused on programming the application, including scene transitions, visual effects, and creating the user interface. Even though I have previous experience with web development and JavaScript programming, this project was quite different from what I have worked on before. With its emphasis on scene rendering with models, animations, and media content, it was definitely not a typical web application to begin with, and it required me to think beyond what I had learned about web development so far.

Where it all started

Our client in this project was Yle Ulkolinja, a documentary series focusing on international affairs and topics, with whom we worked very closely throughout the whole project. Yle was preparing to release a documentary about a Ukrainian stabilisation point, and they wanted to create an AR web application to accompany the documentary’s storytelling. Yle had previously visited the stabilisation point for filming purposes, and gave us lots of materials to work with, including interviews, videos, and photos.

Into the deep end – a race against time

When we interns joined this project, we had around two weeks to create a roadmap and two months for implementation. This meant that time needed to be used wisely and taken into account when planning the visuals.

The time constraint wasn’t the only challenge for us in this project. Since this was going to be a web application utilising AR, it was decided that the 8th Wall platform would be used to assemble it. Neither of us had any previous experience with this platform. From an artist’s point of view, it meant that I would not be able to assemble the final scene myself. Instead, I gave instructions on what the lighting and UI should look like. This was a new way of working for me, which in hindsight saved me lots of time, but it also emphasized the importance of clear and efficient communication.

On the left side is a snippet of the Figma view that I made for the UI design guide. On the right side is a breakdown of what the lighting should tell the viewer and the changes between scenes.

Working efficiently – how to create balanced assets under tight deadlines

Yle provided us with photographs that our project lead Narmeen Marji turned into photogrammetry scans. In short, photogrammetry scans are 3D objects that have been assembled from 2D data – in this case, from photographs. The scans produce a very realistic look, especially if the original data is high quality. The downside to photogrammetry scans is that they need a lot of cleanup to get anywhere close to optimized, and since this project needed to run on mobile devices, optimization was crucial.

This was my first time working with photogrammetry, and given the deadline, I had to learn the cleanup process fast. The client had also requested that the assets stay close to the original scans, so I wasn’t aiming for anything too polished. In brief, the workflow consisted of deleting everything extra, closing up holes, and decimating the mesh to lower the polycount.

If a scan was completely unusable due to large deformities in the mesh, I had to do some remodeling. This process was mostly the same as cleaning up the scans, except that after the cleanup I made a new low-poly mesh and baked textures onto it from the original. All in all, photogrammetry scans give quick results, but you can spend countless hours cleaning them up.

In the first picture is a comparison of the original scan and the game asset that I made. In this case the original scan was in such a rough shape that remodeling it was faster. The second example is of a defibrillator that I followed the basic workflow with. 

Another big task for me was making characters with animations. Usually, creating characters from scratch and animating them manually or with motion capture is a very time-consuming process, so I needed to find ways to cut some corners. I had previous experience with the MakeHuman addon for Blender, which can be used to make humanoid characters, and I knew that it included a game-engine-friendly rig.

Creating the character meshes this way was fast and I got great reference pictures from Yle, as the characters were based on real people. MakeHuman also has a wide variety of clothes and hairstyles that I used as the base. I did some minor tweaks by hand and added some small meshes, like pockets and extra hair strands. I did not have to pay any attention to the textures, since the characters were all in black to give the impression of a shadow. 

Reference photos of Olena provided to us by Yle and the character model made based on them. Also a pose from one of the animations for the last scene.

For the animations, I used Mixamo, which has a large library of different styles of animations. I knew I needed animations that were realistic rather than stylized, and that looped, since the scenes in the story were quite long. Most of the usable animations were very short, which meant I needed to stitch them together by hand in Blender.

This is where I ran into my biggest problem in this whole project. The Mixamo animations worked fine when brought into Blender, until I tried to stitch them together. Somehow, some of the animations were baked into the mesh itself rather than the rig. I did not have the skillset to solve this and got help from one of our lovely Art Leads, Emmi Isokirmo. Solving the problem took a couple of days, but I had prepared for this by scheduling plenty of time for the animations, since I knew they might be time-consuming.

Final thoughts

All in all, I’m very pleased with my execution of this project and how I was able to handle the tight deadline. As an artist, it is always hard to call a project finished when you know you could keep working on making something more visually pleasing and optimized. This project definitely tested me and taught me to let things go and keep moving. I was also reminded of the importance of clear and efficient communication, and I’m thankful that I was partnered up with Irina. If we had any worries about the deadline or technical difficulties, we could rely on each other for support.

And now, passing the mic to Irina, who will share more about the programming side of this project:

Developing the WebAR experience with 8th Wall

Right from the beginning, we had to figure out how to make our WebAR experience work in the browser on both Android and iOS. Because the Three.js WebXRManager relies on the WebXR API, which is currently not supported on iOS, we solved the issue by using the 8th Wall platform. It features SLAM (Simultaneous Localization and Mapping) technology for world tracking, used to create real-time WebAR experiences in the browser. Initialising the project was fairly simple, as it started from an 8th Wall template, but it still took some time to get used to the editor and to WebAR development in general. The code was written in JavaScript, with the Three.js library used for rendering 3D objects, animations, and visual effects.
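The browser gap can be illustrated with a small feature check. This is a hypothetical sketch, not code from the project – 8th Wall sidesteps the problem entirely by doing its own camera-feed tracking – but it shows why a pure WebXR approach would have excluded iOS users:

```javascript
// Hypothetical sketch: check whether the browser can run a native
// WebXR AR session, or whether a fallback such as 8th Wall is needed.
// The navigator object is passed in so the check is easy to test.
async function canUseWebXR(nav) {
  // iOS Safari exposes no `navigator.xr` object at all.
  if (!nav || !nav.xr) return false;
  // Even with WebXR present, immersive AR support is not guaranteed.
  try {
    return await nav.xr.isSessionSupported('immersive-ar');
  } catch {
    return false;
  }
}
```

In a real page this would be called as `canUseWebXR(navigator)` before deciding which rendering path to initialise.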

8th Wall online editor from left to right: code structure, SceneLoader.js code and simulator preview.

How it all came together in the app

After the stabilisation point photogrammetry model was loaded into the app, I added some lighting and assets. Then I added interactions to the assets to display information to the user. To make the interactable assets stand out, I used the Three.js EffectComposer to create an outline effect through post-processing.
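In Three.js, the outline post-processing pass (OutlinePass, used together with EffectComposer) reads a `selectedObjects` array to decide what gets highlighted, so tap-to-highlight interaction boils down to maintaining that array. The following is a minimal sketch of that bookkeeping with hypothetical names, not the project’s actual code:

```javascript
// Hypothetical sketch of the selection bookkeeping behind an outline
// highlight. The `selectedObjects` array would be assigned to
// `outlinePass.selectedObjects`; tapping an interactable asset
// toggles it in or out of the array.
class OutlineSelection {
  constructor() {
    this.selectedObjects = [];
  }
  // Returns true if the object is highlighted after the toggle.
  toggle(object) {
    const i = this.selectedObjects.indexOf(object);
    if (i === -1) {
      this.selectedObjects.push(object);   // highlight on tap
    } else {
      this.selectedObjects.splice(i, 1);   // tap again to clear
    }
    return this.selectedObjects.includes(object);
  }
  clear() {
    this.selectedObjects.length = 0;       // e.g. on scene change
  }
}
```

Keeping the array as a single shared reference means the outline pass picks up changes automatically on the next rendered frame.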

As more assets were added, it became apparent that some older devices couldn’t handle loading that many models and textures. This was fixed by reducing the texture sizes of the room model and assets. In general, I found the debugging process in 8th Wall a bit challenging, because the app has to be rebuilt each time to preview changes.
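Texture budgeting of this kind usually follows a simple rule: cap the longest edge and keep dimensions at powers of two, which mobile GPUs and mipmapping handle best. The helper below is a hypothetical illustration of that rule, not the actual resizing step used in the project:

```javascript
// Hypothetical helper: cap a texture's dimensions to a maximum edge
// length while preserving aspect ratio, snapping down to powers of
// two so the result stays mobile-GPU- and mipmap-friendly.
function capTextureSize(width, height, maxEdge) {
  const scale = Math.min(1, maxEdge / Math.max(width, height));
  // Snap down to the nearest power of two so we never exceed the cap.
  const toPow2 = (n) => Math.pow(2, Math.floor(Math.log2(n)));
  return {
    width: toPow2(width * scale),
    height: toPow2(height * scale),
  };
}
```

For example, a 4096×2048 room texture capped at 1024 comes out as 1024×512, a quarter of the memory per axis.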

This application consists of multiple scenes taking place in the same physical space, each featuring a distinct set of characters, lighting, animations, and media. That meant spending a lot of time designing how scenes transition and how the app handles the different configurations. I created a SceneLoader for managing scene changes and applying the properties of the active scene. Once the structure of the project was set, it was much easier to add and change scene properties.
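A SceneLoader of this kind can be imagined as a small registry of scene configurations that, on activation, works out what to show and hide relative to the previous scene. This is a minimal sketch under assumed scene names and property sets – the real implementation also swaps Three.js objects, lights, animations, and media:

```javascript
// Minimal SceneLoader-style sketch (scene names and properties are
// hypothetical). Every scene shares the same room model but declares
// its own characters, lighting and media; activating a scene reports
// which characters to show and hide relative to the previous one.
class SceneLoader {
  constructor(scenes) {
    this.scenes = scenes;   // { name: { characters, lighting, media } }
    this.current = null;
  }
  activate(name) {
    const next = this.scenes[name];
    if (!next) throw new Error(`Unknown scene: ${name}`);
    const prev = this.current ? this.scenes[this.current] : { characters: [] };
    const diff = {
      show: next.characters.filter((c) => !prev.characters.includes(c)),
      hide: prev.characters.filter((c) => !next.characters.includes(c)),
      lighting: next.lighting,
      media: next.media,
    };
    this.current = name;
    return diff;
  }
}

const loader = new SceneLoader({
  intro:   { characters: ['medic'],            lighting: 'dim',   media: 'intro.mp4' },
  surgery: { characters: ['medic', 'patient'], lighting: 'harsh', media: 'surgery.mp4' },
});
```

Centralising the configuration like this is what makes it cheap to add or change scene properties later: a new scene is just a new entry in the table.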

Conclusion

Overall, I have learned a lot during this project, including WebAR development, which is something I have been interested in learning for a while! I also learned many new things about Three.js, and found the post-processing especially interesting. I was very glad to work with Jessica. We were able to solve problems together and communicated efficiently, which was very important under a tight schedule.

Key Stakeholders and Collaboration

The project ‘Shadows of the Stabilisation Point’ was carried out in collaboration with Yle Ulkolinja and Helsinki XR Center’s Virtual Creation Core initiative. The goal of the project was to document the harsh realities faced at a Ukrainian frontline clinic. The immersive and moving spatial journalism implementation combines videos, photos, and interviews of the medical staff and patients, and allows the user to explore the clinic virtually on a mobile device. The stabilisation point was modeled using photogrammetry techniques based on real photos of the clinic, and 3D representations of individuals were added afterward to bring the experience to life.

The experience is available both in Finnish and English on Yle’s website. Please note that the experience contains material that may be disturbing.


To see previous news about our trainees’ projects, head over to the Trainee news section.
