Overview
In 2016, VI-WorldSIM was developed to automate the creation of synthetic sensor data for testing vehicles driven by artificial intelligence. The technology helped advance vehicle safety without putting dangerous, untested automated vehicles on the road. However, the software itself was incredibly difficult to manage without a UI or visualization, as it was essentially a command-line program that had to be fed arguments through a console.
Approach
I joined in 2018, when the back end of the software had been solidified and the team was looking into creating a standalone application to visualize and author the synthetic data the main software was generating.
My first tasks on the project were creating a robust animation system and a customizable character system for the "Actors" in the data: vehicles, pedestrians, and animals.
I then transitioned into managing a team of 3D contractors who used my tools and pipeline to add new content weekly, while I implemented the front-end UI for our software, including our dynamic 3D scripting components and visualization methods.
UI and Dynamic Actors
Working very closely with the engineering team and product designer, I implemented all the UI and interactive features. This included all the widgets and responsive 3D elements such as locators, bounding boxes, and trajectories. I also developed the dynamic visual parameters for vehicles and pedestrians.
Developing a 3D asset library
After creating a parent actor with physics and modifiable visual parameters, I gathered a small team of 3D modelers and guided them through the process of making optimized vehicle models for the product. I also created materials for the vehicles that used packed texture maps as masks, so that each part of a vehicle could be modified through the engine itself and stay visually consistent with the other vehicles regardless of the creator and their process. These materials were also optimized so they would not be too taxing on the engine at runtime.
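The packed-mask idea can be sketched as follows. This is a minimal, hypothetical illustration, not the actual material: it assumes one RGBA mask texture whose channels select the body, trim, glass, and light regions of a vehicle, and tints each region with a color chosen in the engine.

```python
# Hypothetical sketch of channel-packed part masking: each channel of one
# RGBA mask texture selects one vehicle part, so a single texture drives
# per-part tinting. Part names and the simple lerp are illustrative.

def shade_pixel(base_color, mask_rgba, part_colors):
    """Blend per-part tint colors using the packed mask channels.

    base_color  -- (r, g, b) sampled from the albedo texture
    mask_rgba   -- (body, trim, glass, lights) mask weights in [0, 1]
    part_colors -- dict of per-part (r, g, b) tints chosen in the engine
    """
    parts = ("body", "trim", "glass", "lights")
    out = list(base_color)
    for weight, part in zip(mask_rgba, parts):
        tint = part_colors[part]
        for i in range(3):
            # Lerp toward the tinted base by the mask weight for this part.
            out[i] = out[i] * (1.0 - weight) + base_color[i] * tint[i] * weight
    return tuple(out)
```

Because every vehicle ships with a mask authored to the same channel convention, recoloring stays consistent across assets no matter which contractor modeled them.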
Creating a modular pedestrian system
Using blend shapes, modular meshes, custom materials, and bone deformations, I created dynamic pedestrian actors that could randomize their weight, height, skin tone, and apparel without breaking animations, and that shared animations across classes.
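In outline, the randomization can be sketched like this. The parameter names, ranges, and apparel options below are illustrative assumptions; the key point is that every parameter maps to a blend-shape weight, bone scale, or mesh/material swap on the same shared skeleton, so any combination still plays the shared animation set.

```python
import random

# Hypothetical sketch of pedestrian randomization: each field maps onto a
# blend shape, bone scale, or modular mesh/material swap. All variants keep
# the same skeleton, so animations stay shared across classes.

def randomize_pedestrian(rng):
    return {
        "weight_blend": rng.uniform(0.0, 1.0),  # blend shape: thin -> heavy
        "height_scale": rng.uniform(0.9, 1.1),  # uniform skeleton scale
        "skin_tone": rng.choice(["tone_a", "tone_b", "tone_c"]),  # material
        "apparel": {                            # modular clothing meshes
            "top": rng.choice(["tshirt", "jacket", "hoodie"]),
            "bottom": rng.choice(["jeans", "shorts", "skirt"]),
        },
    }

rng = random.Random(42)  # seeded so a dataset's actors are reproducible
actor = randomize_pedestrian(rng)
```

Seeding the generator also means a synthetic scene can be regenerated with the exact same crowd when a test needs to be reproduced.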
Optimization and Animation Retargeting
Before adding these meshes, I created LODs for the actors and reused the same mesh geometry so that the LODs and rigs transferred easily to the other pedestrian types and their clothing. I also created one master rig used to author the animations for all the actors, and instead of hand-animating from scratch, I used constraints to recycle older animations.
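The constraint-based recycling can be sketched as a per-frame bone mapping. This is a simplified assumption of the approach, not the actual tool: each master-rig bone copies the rotation of a mapped source-rig bone, with a per-bone offset compensating for differing bind poses (here reduced to single-axis angles for clarity).

```python
# Hypothetical sketch of constraint-based animation recycling: each master-rig
# bone is constrained to a source-rig bone, copying its rotation every frame
# with a bind-pose offset. Bone names and single-axis angles are illustrative.

BONE_MAP = {  # master-rig bone -> source-rig bone
    "hips": "pelvis",
    "spine": "spine_01",
    "head": "head",
}

def retarget_frame(source_pose, bind_offsets):
    """Map one frame of source-rig rotations onto the master rig.

    source_pose  -- {source_bone: rotation_degrees} for this frame
    bind_offsets -- {master_bone: offset_degrees} correcting bind poses
    """
    target_pose = {}
    for master_bone, source_bone in BONE_MAP.items():
        offset = bind_offsets.get(master_bone, 0.0)
        target_pose[master_bone] = source_pose[source_bone] + offset
    return target_pose
```

Running this over every frame of an old clip yields a new clip on the master rig, which then plays on every actor that shares that rig.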