A dynamic event capturing and rendering system collects and aggregates video, audio, positional, and motion data to create a comprehensive 360-degree rendering of a field of play from the user's perspective. An object associated with a user collects data that is stitched together and synchronized to provide post-event analysis and training. Through an interface, actions that occurred during an event can be recreated, providing the viewer with information on what the user associated with the object was experiencing, where the user was looking, and how certain actions may have changed the outcome. Using the collected data, a virtual reality environment is created that can be manipulated to present alternative courses of action and outcomes.
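The stitching and synchronization of per-sensor data described above could be approached as timestamp alignment across independent streams. The following is a minimal illustrative sketch, not the system's actual implementation; all names (`SensorStream`, `synchronize`) and the nearest-timestamp strategy are assumptions introduced here for clarity.

```python
from bisect import bisect_left
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple


@dataclass
class SensorStream:
    """Timestamped samples from one sensor on the tracked object (hypothetical)."""
    name: str
    # (timestamp, payload) pairs, kept sorted by timestamp.
    samples: List[Tuple[float, Any]] = field(default_factory=list)

    def nearest(self, t: float) -> Tuple[float, Any]:
        """Return the sample whose timestamp is closest to t."""
        times = [s[0] for s in self.samples]
        i = bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - t))
        return self.samples[best]


def synchronize(streams: List[SensorStream], t: float) -> Dict[str, Any]:
    """Assemble one synchronized frame: the closest sample from each stream at time t."""
    return {s.name: s.nearest(t)[1] for s in streams}


# Toy example: position and gaze streams sampled at slightly different times.
position = SensorStream("position", [(0.0, (0, 0)), (1.0, (1, 2)), (2.0, (2, 4))])
gaze = SensorStream("gaze", [(0.1, "left"), (0.9, "goal"), (2.1, "ball")])

frame = synchronize([position, gaze], 1.0)
# frame == {"position": (1, 2), "gaze": "goal"}
```

In a real system, such aligned frames would feed the rendering pipeline, so that the viewer can scrub to any moment of the event and see every sensor's state at that instant.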