TeslaWay: A Real-Time 3D Autonomous Vehicle Simulation with Multi-Sensor Visualization in Pure JavaScript Canvas

Romi Nur Ismanto
Independent AI Research Lab
rominur@gmail.com
February 2026

Abstract

We present TeslaWay, a real-time 3D autonomous vehicle simulation built entirely in pure JavaScript using the HTML5 Canvas 2D API with zero external dependencies. The system renders procedurally generated road environments with perspective projection, simulates a multi-modal sensor array including 8 cameras, 4 LiDAR units with 360-degree point cloud visualization, 360-degree radar, and 12 ultrasonic sensors, and implements AI-controlled traffic vehicles with dynamic traffic light state machines. A cinematic heads-up display presents real-time telemetry including speed, battery level, object detection confidence scores across five categories (lanes, vehicles, pedestrians, traffic lights, road signs), and a radar minimap. The entire simulation is contained in a single HTML file requiring no build process, server infrastructure, or external libraries, sustaining 60 FPS on modern desktop browsers and interactive frame rates on mobile devices. TeslaWay demonstrates that visually compelling autonomous driving simulations can be implemented using only native web platform APIs, making AV technology concepts accessible to a broad audience through the browser.

Keywords: autonomous driving simulation, self-driving visualization, LiDAR point cloud, Canvas 2D, 3D perspective projection, sensor fusion, object detection, HUD, JavaScript, zero dependencies

1. Introduction

The rapid advancement of autonomous vehicle (AV) technology by companies such as Tesla, Waymo, and Cruise has generated tremendous public interest in how self-driving cars perceive and navigate their environment. However, the underlying technology—multi-sensor fusion, real-time object detection, LiDAR point cloud processing, and AI-driven decision-making—remains opaque to most observers. Professional-grade AV simulators like CARLA [1], LGSVL [2], and NVIDIA DRIVE Sim [3] require significant computational resources, complex installation procedures, and domain expertise to operate.

TeslaWay addresses this accessibility gap by implementing a complete autonomous driving simulation entirely within the web browser. The system visualizes the key components of a self-driving perception pipeline—cameras, LiDAR, radar, ultrasonic sensors, object detection, and decision-making—through an interactive, cinematic first-person driving experience. By using only the HTML5 Canvas 2D API without any external JavaScript libraries, frameworks, or build tools, TeslaWay achieves maximum portability and accessibility: any device with a modern web browser can run the simulation instantly.

The contributions of this paper are as follows: (1) a custom 3D perspective projection engine implemented entirely in Canvas 2D without WebGL or external 3D libraries, (2) a comprehensive multi-sensor simulation framework visualizing LiDAR, radar, camera, and ultrasonic sensor data simultaneously, (3) an AI traffic management system with dynamic traffic light state machines, (4) a cinematic HUD system presenting real-time perception pipeline telemetry, and (5) a demonstration that complex AV simulations are feasible within the zero-dependency, single-file web development paradigm.

2. Related Work

2.1 Professional AV Simulators

CARLA (Car Learning to Act) [1] provides an open-source urban driving simulator built on Unreal Engine 4, offering high-fidelity rendering, configurable sensor suites, and scenario scripting. LGSVL Simulator [2] targets autonomous driving development with Unity-based rendering and support for ROS/ROS2 integration. NVIDIA DRIVE Sim [3] provides cloud-based, physically accurate sensor simulation for production AV development. While these platforms offer unmatched fidelity, they require dedicated GPU hardware, significant storage, and substantial setup time.

2.2 Browser-Based Driving Simulations

Browser-based pseudo-3D racing games have existed since the early days of JavaScript, with seminal work by Wistrom [4] on road rendering techniques using Canvas. However, these implementations focus on entertainment rather than AV technology visualization, lacking sensor simulation, object detection displays, and perception pipeline telemetry. TeslaWay extends the pseudo-3D rendering paradigm with comprehensive AV sensor visualization, producing an educational tool rather than a game.

2.3 Tesla FSD Visualization

Tesla's Full Self-Driving (FSD) system [5] provides a real-time visualization to the driver showing detected objects, lane markings, traffic signals, and the planned vehicle path. TeslaWay draws inspiration from this visualization paradigm, recreating the cinematic quality and information density of Tesla's dashboard display in a browser-accessible format.

3. System Architecture

TeslaWay is architected as a single-file web application with all rendering logic, simulation state, sensor systems, and UI elements contained within one index.html file. The architecture comprises seven primary subsystems:

Initialization → 3D Road Generator → Sensor Simulation Engine → AI Traffic Manager → Object Detector → HUD Renderer → Frame Compositor
Table 1: System component overview
Component Responsibility Technology
3D Road Generator Procedural road segments with curves, intersections, lane markings Custom perspective projection
Environment Renderer Buildings, trees, starfield, moon, streetlights Canvas 2D with depth sorting
Sensor Engine LiDAR sweep, radar scan, camera feeds, ultrasonic Procedural visualization
AI Traffic Manager Vehicle spawning, lane-following, traffic light compliance State machine + distance sensing
Object Detector Bounding boxes, confidence scores, classification Bounded random walk algorithm
HUD System Speed, battery, sensors, detection accuracy, minimap Canvas text/shape overlay
Intro Sequence Tesla logo animation, countdown, system boot CSS + Canvas animation

3.1 Rendering Pipeline

The main rendering loop executes at 60 FPS using requestAnimationFrame. Each frame follows a strict rendering order to ensure correct visual compositing: sky and environment (furthest layer), road surface with lane markings, 3D buildings and trees (depth-sorted), AI traffic vehicles, LiDAR point cloud overlay, object detection bounding boxes, HUD elements (nearest layer), and post-processing effects (scanlines, vignette).
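The back-to-front compositing order above can be sketched as an ordered list of render passes driven by the frame loop. The pass names below are illustrative, not TeslaWay's actual function names; in the real system each pass issues Canvas 2D draw calls.

```javascript
// Per-frame compositing order (back-to-front), as a minimal sketch.
// Each pass is a stub that records its name; in TeslaWay each would
// draw into the shared 2D context.
const passLog = [];
const makePass = (name) => () => passLog.push(name);

const renderPasses = [
  makePass("sky"),        // starfield, moon (furthest layer)
  makePass("road"),       // road surface and lane markings
  makePass("scenery"),    // buildings and trees, depth-sorted far-to-near
  makePass("traffic"),    // AI vehicles
  makePass("lidar"),      // point cloud overlay
  makePass("detections"), // bounding boxes
  makePass("hud"),        // telemetry (nearest layer)
  makePass("postfx"),     // scanlines, vignette
];

function renderFrame() {
  passLog.length = 0;
  for (const pass of renderPasses) pass();
  // In the browser the loop continues with:
  // requestAnimationFrame(renderFrame);
}
renderFrame();
```

Keeping the order in a single array makes the compositing contract explicit and trivial to audit when adding a new layer.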

3.2 Zero-Dependency Constraint

The entire application uses only native browser APIs: Canvas 2D for rendering, addEventListener for input handling, and CSS for the intro sequence animations. The only network dependency is Google Fonts (Orbitron and Inter typefaces) for HUD typography. After initial font loading, the simulation operates entirely offline, enabling demonstrations in environments without internet connectivity.

4. 3D Perspective Projection Engine

The core rendering challenge in TeslaWay is producing convincing 3D perspective views using only the Canvas 2D API, which provides no native support for 3D transformations, vertex buffers, or shader programs available in WebGL. TeslaWay implements a custom perspective projection engine that transforms world-space coordinates into screen-space positions.

4.1 Projection Model

The projection follows the standard pinhole camera model where world-space points (x, y, z) are projected to screen-space coordinates (sx, sy) through perspective division. The camera is positioned at the ego vehicle's location, looking forward along the road. The field of view is tuned to approximate a dashboard-mounted camera, balancing visual drama with geometric accuracy.
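The perspective division described above reduces to a few lines. The sketch below assumes a camera looking down +z with world y pointing up; the focal length and screen dimensions are illustrative values, not TeslaWay's tuned parameters.

```javascript
// Minimal pinhole projection: world-space (x, y, z) to screen-space (sx, sy).
// cam is the ego vehicle's camera position; focal controls field of view.
function project(x, y, z, cam, screenW, screenH, focal) {
  const dx = x - cam.x, dy = y - cam.y, dz = z - cam.z;
  const scale = focal / dz;                      // perspective division
  return {
    sx: screenW / 2 + dx * scale * (screenW / 2), // center + scaled offset
    sy: screenH / 2 - dy * scale * (screenH / 2), // screen y grows downward
    scale,                                        // reuse for sprite sizing
  };
}
```

The returned `scale` is reused to size sprites and road segments, so distant objects shrink consistently with their screen positions.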

4.2 Road Segment Generation

Roads are generated procedurally as a sequence of segments, each defined by type (straight, left curve, right curve, intersection), length, and transition parameters. Road curvature is achieved by applying sinusoidal horizontal offsets to the centerline as a function of depth. Lane markings are rendered as dashed yellow center lines and solid white edge lines, with perspective foreshortening creating the illusion of depth.

Table 2: Road segment types
Segment Type Description Visual Features
Straight Linear road section Parallel lane markings, constant width
Left Curve Road curves to the left Sinusoidal offset, banking effect
Right Curve Road curves to the right Sinusoidal offset, banking effect
4-Way Intersection Cross-road with crosswalks Crosswalk lines, traffic signals, widened road
3-Way Intersection T-junction Partial crosswalks, directional signals
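The sinusoidal centerline offset described in Section 4.2 can be sketched as a function of depth. The `strength` parameter and the quarter-sine easing are hypothetical tuning choices, not TeslaWay's exact formulation.

```javascript
// Horizontal centerline offset as a function of depth z into a segment.
// segment.strength is a hypothetical curvature gain; the quarter-sine
// eases the bend in so the road curves away from the camera smoothly.
function curveOffset(z, segment) {
  if (segment.type === "straight") return 0;
  const dir = segment.type === "left" ? -1 : 1;
  const t = Math.min(z / segment.length, 1);   // progress through the segment
  return dir * segment.strength * Math.sin(t * Math.PI / 2) * z;
}
```

Applying this offset before projection shifts each road slice laterally, so the foreshortened slices trace a curve on screen.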

4.3 Environment Rendering

The urban environment is composed of procedurally placed buildings and trees along the road edges. Buildings are rendered as colored rectangles with illuminated window grids, using depth-sorted rendering to ensure correct occlusion. The night-time sky features an animated starfield and a moon rendered with subtle shading. Buildings closer to the camera receive more detailed window patterns, while distant buildings use simplified flat-colored rectangles, implementing a basic level-of-detail optimization.

5. Multi-Sensor Simulation Framework

TeslaWay simulates the complete sensor suite of a modern autonomous vehicle, visualizing the data streams from eight distinct sensor types operating simultaneously.

Table 3: Simulated sensor array specification
Sensor Type Count Range Visualization
Camera 8 250 m Object bounding boxes with classification
LiDAR 4 200 m Point cloud overlay with color-coded distance
Radar 1 (360°) 300 m Minimap with range rings and object blips
Ultrasonic 12 8 m Status indicator in HUD panel
GPS 1 Global Coordinate display and location label

5.1 LiDAR Point Cloud Visualization

The LiDAR system is the most visually prominent sensor visualization in TeslaWay. Four virtual LiDAR units generate a 360-degree point cloud that is overlaid on the 3D scene. The visualization includes a rotating sweep animation that continuously scans the environment, generating point returns from road surfaces, buildings, traffic vehicles, and environmental features.

LiDAR points are rendered as small circles with color coding based on distance: near-field returns (0–50 m) appear in cyan, mid-range returns (50–120 m) in blue, and far-field returns (120–200 m) in dark blue with reduced opacity. The point density decreases with distance, mimicking the angular resolution limitations of real LiDAR sensors. Users can toggle the LiDAR overlay on and off using the L key, enabling comparison between the raw visual scene and the sensor-augmented view.
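The distance banding above maps directly to a small lookup function. The rgba values are illustrative approximations of the cyan-to-dark-blue palette, and the density falloff factor is a hypothetical constant.

```javascript
// Distance-to-color banding for LiDAR returns, per the ranges above:
// 0-50 m cyan, 50-120 m blue, 120-200 m dark blue at reduced opacity.
function lidarPointStyle(d) {
  if (d <= 50)  return { color: "rgba(0,255,255,0.9)" };  // near field
  if (d <= 120) return { color: "rgba(0,120,255,0.7)" };  // mid range
  return { color: "rgba(0,40,160,0.35)" };                // far field
}

// Point density falls with distance, mimicking the angular resolution
// limits of real LiDAR (0.8 is an illustrative falloff factor).
function keepPoint(d, maxRange = 200) {
  return Math.random() < 1 - 0.8 * (d / maxRange);
}
```

Filtering with `keepPoint` before drawing keeps the near field dense and the far field sparse without storing separate point budgets per band.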

5.2 Radar Minimap

The radar system is visualized through a circular minimap positioned in the bottom-right corner of the display. The minimap shows a top-down view centered on the ego vehicle, with concentric range rings at regular intervals. Detected objects appear as colored blips: the ego vehicle in green, other traffic vehicles in red, and road edges as faint arcs. A sweeping line rotates continuously around the minimap, simulating the radar's scan pattern.
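Plotting a blip on the minimap is a linear mapping from the ego-relative (forward, lateral) offset into the circle. The radius and range values below are illustrative, not the simulation's actual constants.

```javascript
// Map an ego-relative offset (forward, lateral, in meters) onto the
// circular radar minimap centered at (cx, cy) with the given pixel
// radius covering `range` meters.
function toMinimap(forward, lateral, cx, cy, radius, range) {
  const scale = radius / range;       // pixels per meter
  return {
    x: cx + lateral * scale,          // right of ego = right on minimap
    y: cy - forward * scale,          // ahead of ego = up on minimap
  };
}
```

The same mapping serves traffic blips and the road-edge arcs; only the color and marker shape differ per object class.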

5.3 Neural Network Status

The HUD displays a neural network processing indicator showing version (v4.7.2), inference status, and processing latency. While this does not represent actual neural network computation, it communicates to viewers the role of deep learning in autonomous driving perception pipelines and the real-time processing requirements of production AV systems.

6. AI Traffic Management

TeslaWay implements an AI-driven traffic system that populates the road with autonomous vehicles exhibiting realistic driving behaviors.

6.1 Vehicle Behavior

AI traffic vehicles are spawned at configurable intervals and placed in available lanes. Each vehicle maintains its own speed, acceleration profile, and lane position. Vehicles exhibit the following behaviors: constant-speed cruising with minor speed variations, traffic light compliance (deceleration for red, proceed for green, decision-making for yellow), forward distance sensing to prevent collisions with vehicles ahead, and tail light rendering that intensifies during braking events.

6.2 Traffic Light State Machine

Traffic signals at intersections operate as finite state machines cycling through red, yellow, and green states with configurable durations. The state machine manages countdown timers and transitions, with the current state influencing AI vehicle behavior. Signals are rendered as 3D-projected housings with colored light indicators that cast subtle glow effects onto the surrounding road surface.
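The signal state machine reduces to a transition table plus a countdown timer. The phase durations below are illustrative defaults, not TeslaWay's configured values.

```javascript
// Traffic light as a finite state machine: green -> yellow -> red -> green,
// each phase holding for a configurable duration in seconds.
const NEXT_PHASE = { green: "yellow", yellow: "red", red: "green" };

function makeSignal(durations = { green: 8, yellow: 3, red: 6 }) {
  return { state: "green", timer: durations.green, durations };
}

function tickSignal(sig, dt) {
  sig.timer -= dt;
  if (sig.timer <= 0) {
    sig.state = NEXT_PHASE[sig.state];
    sig.timer += sig.durations[sig.state]; // carry over any overshoot
  }
  return sig.state;
}
```

AI vehicles read `sig.state` each frame; the countdown timer also drives the rendered signal glow transitions.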

6.3 Collision Avoidance

Each AI vehicle monitors the distance to the vehicle directly ahead in its lane, adjusting speed to maintain safe following distance. This produces natural traffic flow patterns including accordion-like compression and expansion of traffic density at intersections and curves, preventing vehicle overlap in the rendered scene.
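The following-distance rule can be sketched as a per-frame speed update. The safe-gap threshold and braking/acceleration rates below are hypothetical; TeslaWay's actual constants are tuned for visual traffic flow.

```javascript
// Per-frame speed adjustment from forward distance sensing.
// gap: distance in meters to the vehicle ahead in the same lane.
// safeGap and the rates (5 braking, 0.5 recovery, in km/h per frame)
// are illustrative tuning values.
function followSpeed(current, cruise, gap, safeGap = 25) {
  if (gap < safeGap) {
    // Too close: brake proportionally to how far inside the safe gap we are.
    return Math.max(0, current - (1 - gap / safeGap) * 5);
  }
  // Clear road: ease back toward the vehicle's cruise speed.
  return Math.min(cruise, current + 0.5);
}
```

Because braking strength scales with gap deficit, queues compress smoothly behind a red light and re-expand on green, producing the accordion effect described above.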

7. Object Detection System

The object detection visualization presents real-time classification results across five primary categories, simulating the output of a production perception pipeline.

Table 4: Object detection categories and confidence ranges
Category Bounding Box Color Confidence Range Detection Source
Lane Markings Green 92–99% Camera + LiDAR fusion
Vehicles Blue 88–98% Camera + Radar + LiDAR
Pedestrians Yellow 85–96% Camera + Ultrasonic
Traffic Lights Red/Amber/Green 90–99% Camera (primary)
Road Signs Cyan 87–97% Camera (primary)

7.1 Confidence Score Generation

Confidence scores are generated using a bounded random walk algorithm. Each detection category maintains a running confidence value that is perturbed each frame by a small random delta, clamped to the category's valid range. This produces smooth, fluctuating confidence values characteristic of real neural network inference, where minor frame-to-frame variations in input data cause corresponding variations in output confidence. Scores are displayed in the HUD with color coding: green for high confidence (>90%), yellow for moderate (70–90%), and red for low (<70%).
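The bounded random walk is a one-line update per category. The jitter magnitude below is an illustrative default.

```javascript
// One step of the bounded random walk for a detection category's
// confidence value: perturb by a small uniform delta, then clamp to
// the category's valid range (jitter of 0.4 points is illustrative).
function stepConfidence(value, min, max, jitter = 0.4) {
  const next = value + (Math.random() * 2 - 1) * jitter;
  return Math.min(max, Math.max(min, next));
}
```

Running one step per category per frame yields the smooth frame-to-frame fluctuation described above, since each value moves at most `jitter` points between frames while the clamp keeps it inside the table's range.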

7.2 Bounding Box Rendering

Detected objects in the 3D scene are annotated with perspective-projected bounding boxes rendered as semi-transparent colored rectangles with solid borders. A label tag at the top of each box displays the class name and confidence percentage. Boxes are drawn after scene rendering but before the HUD overlay, ensuring integration with the 3D scene while remaining visually distinct.

8. Heads-Up Display System

The HUD is a defining visual element of TeslaWay, providing continuous real-time information overlays inspired by Tesla's FSD visualization paradigm.

Table 5: HUD component inventory
Component Position Information
Speed Indicator Bottom center Current speed with unit label
Battery Display Bottom area Battery percentage with visual bar
Location Display Top left Street name, GPS coordinates
Sensor Status Panel Top right Active sensor count per type
Object Detection Panel Right side Per-category accuracy percentages
Radar Minimap Bottom right Top-down view with range rings
AI Decision Text Center area Current driving decision narrative
Autopilot Status Top area Engagement status and NN version

8.1 Typography

The HUD employs the Orbitron typeface for primary display elements, chosen for its geometric, futuristic aesthetic aligned with automotive instrument cluster design. Secondary text uses the Inter typeface for readability at smaller sizes. All text is rendered onto the Canvas using fillText with appropriate font sizing, color, and alpha blending. Color coding follows AV visualization conventions: green for healthy/active status, amber for caution, red for alerts, and cyan for informational displays and LiDAR data.

8.2 Location Cycling

The HUD cycles through simulated Bay Area driving locations including Market Street in San Francisco and Highway 101 North near Palo Alto. Each location displays corresponding GPS coordinate ranges and street name labels, enhancing realism. Location transitions occur smoothly as the simulation progresses, simulating a continuous drive through Tesla's home territory.

9. Cinematic Introduction Sequence

TeslaWay opens with a cinematic introduction sequence designed to establish visual identity before the driving simulation begins. The sequence consists of three phases:

  1. Tesla Logo Animation: The Tesla logo is rendered procedurally using Canvas path operations with fade-in and scaling effects, maintaining the zero-dependency constraint by avoiding image assets.
  2. Loading Bar: A progress bar animation simulates system initialization, building anticipation for the main simulation.
  3. Countdown Timer: A 3-second countdown (3, 2, 1) rendered in Orbitron typeface with scaling animations. During this phase, the simulation world is initialized and the 3D scene is prepared.

Upon countdown completion, the simulation transitions seamlessly to the live driving view with all systems active. A notification system displays status messages ("AUTOPILOT ENGAGED", "ALL SYSTEMS NOMINAL") to reinforce the autonomous driving narrative.

10. Post-Processing Effects

TeslaWay applies several post-processing effects to achieve its cinematic visual quality. A subtle scanline overlay, drawn as evenly spaced semi-transparent horizontal lines, evokes the look of a camera feed, while a vignette darkens the frame edges to draw the eye toward the road ahead. Both effects are rendered as the final compositing pass each frame, after the HUD, so they uniformly tint every layer beneath them.

11. Implementation Details

11.1 Rendering Optimization

Achieving 60 FPS within the Canvas 2D API requires careful optimization. TeslaWay employs object pooling for traffic vehicles and LiDAR points to reduce garbage collection pressure, frustum culling to skip off-screen geometry, level-of-detail reduction for distant buildings, batched Canvas state changes to minimize context switches, and dirty region tracking for infrequently changing HUD elements.
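The object-pooling strategy mentioned above can be sketched in a few lines. The pool API (`acquire`/`release`) is a generic pattern, not TeslaWay's exact interface.

```javascript
// Minimal object pool for traffic vehicles or LiDAR points: objects are
// recycled instead of allocated per frame, reducing GC pressure.
function makePool(create, size) {
  const free = Array.from({ length: size }, create);
  return {
    // Hand out a pooled object, growing the pool only if it is exhausted.
    acquire: () => free.pop() ?? create(),
    // Return an object to the pool for reuse on a later frame.
    release: (obj) => free.push(obj),
  };
}
```

A despawned vehicle is released rather than dropped, so steady-state traffic churn allocates nothing, which matches the stable memory profile reported in Section 13.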

11.2 Night Lighting Model

The night scene uses a simplified lighting model with ambient, directional, and point light contributions. Street lights and vehicle headlights act as point sources with inverse-square falloff approximated through radial Canvas gradients. Building windows emit warm-toned light with varying intensities, creating a living urban environment. The combination produces visually rich night driving scenes with pools of illumination and atmospheric depth.
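The inverse-square falloff can be sampled into a small set of radial-gradient stops. The stop count and decay constant below are illustrative; in the simulation these stops would feed `ctx.createRadialGradient` for street lights and headlights.

```javascript
// Sample an inverse-square-like intensity curve into radial-gradient
// stops (stop count and the decay constant 8 are illustrative).
function lightStops(stops = 4) {
  return Array.from({ length: stops + 1 }, (_, i) => {
    const t = i / stops;                    // 0 at light center, 1 at edge
    const intensity = 1 / (1 + 8 * t * t);  // approximates 1/r^2 falloff
    return { offset: t, alpha: +intensity.toFixed(3) };
  });
}
// Browser usage sketch:
//   const g = ctx.createRadialGradient(x, y, 0, x, y, r);
//   for (const s of lightStops()) g.addColorStop(s.offset, `rgba(255,220,160,${s.alpha})`);
```

Precomputing the stops once per light type avoids rebuilding the curve every frame; only the gradient's position changes as the scene scrolls.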

11.3 Canvas API Techniques

The simulation makes extensive use of advanced Canvas 2D features: radial gradients for light falloff around street lights and headlights, path operations for the procedurally drawn Tesla logo and traffic signal housings, fillText with per-element font sizing and alpha blending for HUD typography, globalAlpha compositing for the LiDAR overlay's opacity transitions, and save/restore state management around each render pass.

12. User Interaction

TeslaWay provides a minimal control scheme reflecting its autonomous nature:

Table 6: User input controls
Input Action Effect
Arrow Up Increase Speed Accelerates ego vehicle (10 km/h increments), updates speedometer
Arrow Down Decrease Speed Decelerates ego vehicle with smooth deceleration curves
L Key Toggle LiDAR Enables/disables LiDAR point cloud overlay with opacity transition

Steering, lane keeping, and traffic navigation are handled entirely by the AI system. The deliberate simplicity of user controls emphasizes the autonomous nature of the simulation and focuses attention on the perception pipeline visualization rather than driving mechanics.
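The control scheme in Table 6 maps onto a single keydown handler. The state shape, speed cap, and 10 km/h step below follow the table; the handler structure itself is a sketch, not TeslaWay's actual code.

```javascript
// Keyboard handling matching Table 6 (state shape is illustrative;
// the 200 km/h cap is a hypothetical limit).
const egoState = { targetSpeed: 60, lidarVisible: true };

function handleKey(key) {
  if (key === "ArrowUp") {
    egoState.targetSpeed = Math.min(200, egoState.targetSpeed + 10);
  } else if (key === "ArrowDown") {
    egoState.targetSpeed = Math.max(0, egoState.targetSpeed - 10);
  } else if (key === "l" || key === "L") {
    egoState.lidarVisible = !egoState.lidarVisible; // triggers opacity fade
  }
}
// Browser wiring: addEventListener("keydown", (e) => handleKey(e.key));
```

Only `targetSpeed` changes instantly; the rendered speed eases toward it each frame, producing the smooth acceleration and deceleration curves described in Table 6.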

13. Performance Analysis

Table 7: Rendering performance across hardware configurations
Hardware Browser Resolution Avg FPS Frame Time
MacBook Pro M2 Chrome 120 1920×1080 60 ~8 ms
MacBook Pro M2 Safari 17 1920×1080 60 ~7 ms
Desktop i7-12700K Chrome 120 1920×1080 60 ~10 ms
Mid-range Laptop Firefox 121 1366×768 55–60 ~14 ms
Mobile (iPhone 15) Safari Mobile 390×844 50–60 ~16 ms

TeslaWay maintains interactive frame rates across diverse hardware, validating the Canvas 2D API as a viable rendering target for real-time 3D simulation with appropriate optimization. The single-file architecture results in approximately 45–55 KB total size with no additional assets. Memory consumption remains stable at 30–50 MB throughout extended sessions with no memory leaks, thanks to the object pooling strategy.

14. Discussion

14.1 Zero-Dependency Advantages

The zero-dependency approach yields several practical benefits: zero supply-chain risk with no npm dependencies that could introduce vulnerabilities, guaranteed long-term compatibility as the Canvas 2D API is a stable web standard, no build step friction making the codebase accessible to developers of all experience levels, and instant deployment through simple file hosting.

14.2 Educational Value

TeslaWay communicates key AV concepts: the role of multi-sensor fusion in environmental perception, sensor types and configurations in self-driving vehicles, real-time processing requirements of perception pipelines, object detection and classification in dynamic environments, and the information density that AV systems must process continuously. The interactive, browser-based format makes these concepts accessible without specialized hardware or software.

14.3 Limitations

The Canvas 2D rendering does not achieve the geometric accuracy of polygon-based 3D engines. Without WebGL, GPU-accelerated geometry processing is unavailable, limiting scene complexity. Sensor simulations are visualization-focused rather than physically accurate: LiDAR point distributions, radar returns, and camera models do not incorporate realistic noise models or sensor-specific artifacts. AI traffic behavior does not model complex interactions such as unprotected left turns or pedestrian yielding. These limitations are acceptable given the system's purpose as a demonstration and educational tool.

15. Conclusion and Future Work

TeslaWay demonstrates that a comprehensive, visually compelling autonomous vehicle simulation can be implemented entirely within the browser using only native web platform APIs. The system renders 3D perspective road environments, simulates a multi-modal sensor array with LiDAR point cloud visualization, implements AI traffic management with dynamic traffic light state machines, provides real-time object detection with confidence scoring, and presents all information through a cinematic heads-up display—all within a single HTML file requiring no build process or server infrastructure.

Future directions include: expanding the road network with highway merges and multi-lane configurations, implementing physically-based sensor noise models, adding weather effects (rain, fog, snow) affecting both rendering and simulated sensor degradation, incorporating path planning visualization, extending the urban environment with pedestrians and cyclists, and exploring optional WebGL rendering as a progressive enhancement.

The complete source code is available at https://github.com/romizone/teslaway under the MIT license, and a live demonstration is accessible at https://teslaway.vercel.app/.

References

  1. Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., & Koltun, V. (2017). CARLA: An Open Urban Driving Simulator. Proceedings of the 1st Annual Conference on Robot Learning (CoRL), pp. 1–16.
  2. Rong, G., Shin, B.H., Tabatabaee, H., et al. (2020). LGSVL Simulator: A High Fidelity Simulator for Autonomous Driving. IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–6.
  3. NVIDIA Corporation. (2023). NVIDIA DRIVE Sim: Autonomous Vehicle Simulation Platform. NVIDIA Developer Documentation.
  4. Wistrom, J. (2014). How to Build a Racing Game: Pseudo-3D Road Rendering. Code Incomplete Blog Series.
  5. Tesla, Inc. (2024). Full Self-Driving (Supervised) Visualization System. Tesla AI Day Technical Documentation.
  6. HTML Living Standard. (2024). The Canvas 2D Rendering Context. WHATWG.
  7. Geiger, A., Lenz, P., & Urtasun, R. (2012). Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3354–3361.
  8. Sun, P., Kretzschmar, H., Dotiwalla, X., et al. (2020). Scalability in Perception for Autonomous Driving: Waymo Open Dataset. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2446–2454.
  9. Caesar, H., Bankiti, V., Lang, A.H., et al. (2020). nuScenes: A Multimodal Dataset for Autonomous Driving. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11621–11631.
  10. Kato, S., Tokunaga, S., Maruyama, Y., et al. (2018). Autoware on Board: Enabling Autonomous Vehicles with Embedded Systems. ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), pp. 287–296.
  11. Quigley, M., Conley, K., Gerkey, B., et al. (2009). ROS: An Open-Source Robot Operating System. ICRA Workshop on Open Source Software, Vol. 3, No. 3.2, p. 5.
  12. Grigorescu, S., Trasnea, B., Cocias, T., & Macesanu, G. (2020). A Survey of Deep Learning Techniques for Autonomous Driving. Journal of Field Robotics, 37(3), pp. 362–386.