We present TeslaWay, a real-time 3D autonomous vehicle simulation built entirely in pure JavaScript using the HTML5 Canvas 2D API with zero external dependencies. The system renders procedurally generated road environments with perspective projection, simulates a multi-modal sensor array including 8 cameras, 4 LiDAR units with 360-degree point cloud visualization, 360-degree radar, and 12 ultrasonic sensors, and implements AI-controlled traffic vehicles with dynamic traffic light state machines. A cinematic heads-up display presents real-time telemetry including speed, battery level, object detection confidence scores across five categories (lanes, vehicles, pedestrians, traffic lights, road signs), and a radar minimap. The entire simulation is contained in a single HTML file requiring no build process, server infrastructure, or external libraries, sustaining 60 FPS rendering on modern desktop browsers. TeslaWay demonstrates that visually compelling autonomous driving simulations can be implemented using only native web platform APIs, making AV technology concepts accessible to a broad audience through the browser.
The rapid advancement of autonomous vehicle (AV) technology by companies such as Tesla, Waymo, and Cruise has generated tremendous public interest in how self-driving cars perceive and navigate their environment. However, the underlying technology—multi-sensor fusion, real-time object detection, LiDAR point cloud processing, and AI-driven decision-making—remains opaque to most observers. Professional-grade AV simulators like CARLA [1], LGSVL [2], and NVIDIA DRIVE Sim [3] require significant computational resources, complex installation procedures, and domain expertise to operate.
TeslaWay addresses this accessibility gap by implementing a complete autonomous driving simulation entirely within the web browser. The system visualizes the key components of a self-driving perception pipeline—cameras, LiDAR, radar, ultrasonic sensors, object detection, and decision-making—through an interactive, cinematic first-person driving experience. By using only the HTML5 Canvas 2D API without any external JavaScript libraries, frameworks, or build tools, TeslaWay achieves maximum portability and accessibility: any device with a modern web browser can run the simulation instantly.
The contributions of this paper are as follows: (1) a custom 3D perspective projection engine implemented entirely in Canvas 2D without WebGL or external 3D libraries, (2) a comprehensive multi-sensor simulation framework visualizing LiDAR, radar, camera, and ultrasonic sensor data simultaneously, (3) an AI traffic management system with dynamic traffic light state machines, (4) a cinematic HUD system presenting real-time perception pipeline telemetry, and (5) a demonstration that complex AV simulations are feasible within the zero-dependency, single-file web development paradigm.
CARLA (Car Learning to Act) [1] provides an open-source urban driving simulator built on Unreal Engine 4, offering high-fidelity rendering, configurable sensor suites, and scenario scripting. LGSVL Simulator [2] targets autonomous driving development with Unity-based rendering and support for ROS/ROS2 integration. NVIDIA DRIVE Sim [3] provides cloud-based, physically accurate sensor simulation for production AV development. While these platforms offer unmatched fidelity, they require dedicated GPU hardware, significant storage, and substantial setup time.
Browser-based pseudo-3D racing games have existed since the early days of JavaScript, with seminal work by Wistrom [4] on road rendering techniques using Canvas. However, these implementations focus on entertainment rather than AV technology visualization, lacking sensor simulation, object detection displays, and perception pipeline telemetry. TeslaWay extends the pseudo-3D rendering paradigm with comprehensive AV sensor visualization, producing an educational tool rather than a game.
Tesla's Full Self-Driving (FSD) system [5] provides a real-time visualization to the driver showing detected objects, lane markings, traffic signals, and the planned vehicle path. TeslaWay draws inspiration from this visualization paradigm, recreating the cinematic quality and information density of Tesla's dashboard display in a browser-accessible format.
TeslaWay is architected as a single-file web application with all rendering logic, simulation state, sensor systems, and UI elements contained within one index.html file. The architecture comprises seven primary subsystems:
| Component | Responsibility | Technology |
|---|---|---|
| 3D Road Generator | Procedural road segments with curves, intersections, lane markings | Custom perspective projection |
| Environment Renderer | Buildings, trees, starfield, moon, streetlights | Canvas 2D with depth sorting |
| Sensor Engine | LiDAR sweep, radar scan, camera feeds, ultrasonic | Procedural visualization |
| AI Traffic Manager | Vehicle spawning, lane-following, traffic light compliance | State machine + distance sensing |
| Object Detector | Bounding boxes, confidence scores, classification | Bounded random walk algorithm |
| HUD System | Speed, battery, sensors, detection accuracy, minimap | Canvas text/shape overlay |
| Intro Sequence | Tesla logo animation, countdown, system boot | CSS + Canvas animation |
The main rendering loop executes at 60 FPS using requestAnimationFrame. Each frame follows a strict rendering order to ensure correct visual compositing: sky and environment (furthest layer), road surface with lane markings, 3D buildings and trees (depth-sorted), AI traffic vehicles, LiDAR point cloud overlay, object detection bounding boxes, HUD elements (nearest layer), and post-processing effects (scanlines, vignette).
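The layered compositing described above can be sketched as a fixed back-to-front layer registry driven by `requestAnimationFrame`. The layer names and the `registerLayer`/`renderFrame` helpers below are illustrative, not TeslaWay's actual identifiers:

```javascript
// Back-to-front layer order for correct compositing (furthest first).
const LAYER_ORDER = [
  'sky',        // gradient, starfield, moon
  'road',       // road surface and lane markings
  'scenery',    // depth-sorted buildings and trees
  'traffic',    // AI vehicles
  'lidar',      // point cloud overlay
  'detections', // bounding boxes
  'hud',        // telemetry overlay (nearest)
  'postfx',     // scanlines, vignette
];

// Registry mapping layer names to draw callbacks; each callback
// receives the 2D context and the current simulation state.
const layerDraws = new Map();
function registerLayer(name, drawFn) { layerDraws.set(name, drawFn); }

function renderFrame(ctx, state) {
  for (const name of LAYER_ORDER) {   // strict order guarantees occlusion
    const draw = layerDraws.get(name);
    if (draw) draw(ctx, state);
  }
}

// In the browser, the loop is scheduled against the display refresh:
// function loop(t) { update(t); renderFrame(ctx, state); requestAnimationFrame(loop); }
// requestAnimationFrame(loop);
```

Because layers draw in registry order rather than insertion order, a draw callback can be registered at any point during startup without breaking the compositing guarantee.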
The entire application uses only native browser APIs: Canvas 2D for rendering, addEventListener for input handling, and CSS for the intro sequence animations. The only network dependency is Google Fonts (Orbitron and Inter typefaces) for HUD typography. After initial font loading, the simulation operates entirely offline, enabling demonstrations in environments without internet connectivity.
The core rendering challenge in TeslaWay is producing convincing 3D perspective views using only the Canvas 2D API, which provides no native support for 3D transformations, vertex buffers, or shader programs available in WebGL. TeslaWay implements a custom perspective projection engine that transforms world-space coordinates into screen-space positions.
The projection follows the standard pinhole camera model where world-space points (x, y, z) are projected to screen-space coordinates (sx, sy) through perspective division. The camera is positioned at the ego vehicle's location, looking forward along the road. The field of view is tuned to approximate a dashboard-mounted camera, balancing visual drama with geometric accuracy.
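A minimal sketch of this pinhole projection follows; the 60-degree field of view and the coordinate conventions (x right, y up, z forward from the camera) are assumptions for illustration, not TeslaWay's tuned constants:

```javascript
// Focal length derived from an assumed ~60 degree vertical field of view.
const FOCAL = 1 / Math.tan((60 * Math.PI / 180) / 2);

// Project a world-space point to screen-space pixels on a width x height
// canvas. Returns null for points behind the camera (culled).
function project(x, y, z, width, height) {
  if (z <= 0) return null;
  const scale = FOCAL / z;                      // perspective division
  return {
    sx: width / 2 + scale * x * (height / 2),   // screen x, centered
    sy: height / 2 - scale * y * (height / 2),  // canvas y grows downward
    scale,                                      // reused to size sprites
  };
}
```

The returned `scale` factor halves each time depth doubles, which is what produces the foreshortening of lane markings and the shrinking of distant vehicles.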
Roads are generated procedurally as a sequence of segments, each defined by type (straight, left curve, right curve, intersection), length, and transition parameters. Road curvature is achieved by applying sinusoidal horizontal offsets to the centerline as a function of depth. Lane markings are rendered as dashed yellow center lines and solid white edge lines, with perspective foreshortening creating the illusion of depth.
| Segment Type | Description | Visual Features |
|---|---|---|
| Straight | Linear road section | Parallel lane markings, constant width |
| Left Curve | Road curves to the left | Sinusoidal offset, banking effect |
| Right Curve | Road curves to the right | Sinusoidal offset, banking effect |
| 4-Way Intersection | Cross-road with crosswalks | Crosswalk lines, traffic signals, widened road |
| 3-Way Intersection | T-junction | Partial crosswalks, directional signals |
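The sinusoidal centerline offset for curved segments can be sketched as follows; the segment fields (`type`, `length`, `maxOffset`) are illustrative names under the assumption that each curve eases in and out so consecutive segments join without kinks:

```javascript
// Horizontal centerline offset as a function of depth into the segment.
function centerlineOffset(segment, depth) {
  const t = Math.min(Math.max(depth / segment.length, 0), 1); // 0..1
  const dir = segment.type === 'left' ? -1
            : segment.type === 'right' ? 1
            : 0;                                              // straight
  // Raised-cosine ease: zero offset and zero slope at both segment
  // ends, reaching maxOffset smoothly at the segment exit.
  return dir * segment.maxOffset * (1 - Math.cos(Math.PI * t)) / 2;
}
```

Applying this offset per projected depth slice, rather than rotating geometry, is what lets a 2D renderer fake curvature cheaply.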
The urban environment is composed of procedurally placed buildings and trees along the road edges. Buildings are rendered as colored rectangles with illuminated window grids, using depth-sorted rendering to ensure correct occlusion. The night-time sky features an animated starfield and a moon rendered with subtle shading. Buildings closer to the camera receive more detailed window patterns, while distant buildings use simplified flat-colored rectangles, implementing a basic level-of-detail optimization.
TeslaWay simulates the complete sensor suite of a modern autonomous vehicle, visualizing the concurrent data streams of five sensor types: camera, LiDAR, radar, ultrasonic, and GPS.
| Sensor Type | Count | Range | Visualization |
|---|---|---|---|
| Camera | 8 | 250 m | Object bounding boxes with classification |
| LiDAR | 4 | 200 m | Point cloud overlay with color-coded distance |
| Radar | 1 (360°) | 300 m | Minimap with range rings and object blips |
| Ultrasonic | 12 | 8 m | Status indicator in HUD panel |
| GPS | 1 | Global | Coordinate display and location label |
The LiDAR system is the most visually prominent sensor visualization in TeslaWay. Four virtual LiDAR units generate a 360-degree point cloud that is overlaid on the 3D scene. The visualization includes a rotating sweep animation that continuously scans the environment, generating point returns from road surfaces, buildings, traffic vehicles, and environmental features.
LiDAR points are rendered as small circles with color coding based on distance: near-field returns (0–50 m) appear in cyan, mid-range returns (50–120 m) in blue, and far-field returns (120–200 m) in dark blue with reduced opacity. The point density decreases with distance, mimicking the angular resolution limitations of real LiDAR sensors. Users can toggle the LiDAR overlay on and off using the L key, enabling comparison between the raw visual scene and the sensor-augmented view.
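The distance banding above maps directly to a small lookup function; the exact RGBA values and the base density constant are assumptions, while the band boundaries (50 m, 120 m, 200 m) come from the text:

```javascript
// Distance-to-style mapping for LiDAR returns.
function lidarPointStyle(distance) {
  if (distance <= 50)  return { color: 'rgba(0, 255, 255, 0.9)' };  // cyan near-field
  if (distance <= 120) return { color: 'rgba(0, 128, 255, 0.7)' };  // blue mid-range
  if (distance <= 200) return { color: 'rgba(0, 64, 160, 0.35)' };  // dark blue, reduced opacity
  return null;                                                      // beyond max range: no return
}

// Point density falls with distance, mimicking fixed angular resolution:
// fewer rays intersect a unit of surface area far from the sensor.
function pointsPerMeter(distance, baseDensity = 12) {
  return Math.max(1, Math.round(baseDensity * 50 / Math.max(distance, 1)));
}
```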
The radar system is visualized through a circular minimap positioned in the bottom-right corner of the display. The minimap shows a top-down view centered on the ego vehicle, with concentric range rings at regular intervals. Detected objects appear as colored blips: the ego vehicle in green, other traffic vehicles in red, and road edges as faint arcs. A sweeping line rotates continuously around the minimap, simulating the radar's scan pattern.
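The minimap reduces to a linear mapping from ego-relative world coordinates to pixel offsets, plus a time-driven sweep angle. The geometry constants below (70 px radius, 2 s sweep period) are illustrative, not TeslaWay's actual values; the 300 m range matches the radar table:

```javascript
// Minimap geometry: pixel radius and radar range in meters.
const MINIMAP = { cx: 0, cy: 0, radius: 70, maxRange: 300 };

// Map a position relative to the ego vehicle (dx right, dz forward,
// in meters) to minimap pixel offsets; null outside radar range.
function toMinimap(dx, dz) {
  if (Math.hypot(dx, dz) > MINIMAP.maxRange) return null;
  const s = MINIMAP.radius / MINIMAP.maxRange;   // meters -> pixels
  return { x: MINIMAP.cx + dx * s, y: MINIMAP.cy - dz * s }; // forward = up
}

// Sweep line angle: one full rotation per sweepPeriod milliseconds.
function sweepAngle(timeMs, sweepPeriod = 2000) {
  return (timeMs % sweepPeriod) / sweepPeriod * 2 * Math.PI;
}
```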
The HUD displays a neural network processing indicator showing version (v4.7.2), inference status, and processing latency. While this does not represent actual neural network computation, it communicates to viewers the role of deep learning in autonomous driving perception pipelines and the real-time processing requirements of production AV systems.
TeslaWay implements an AI-driven traffic system that populates the road with autonomous vehicles exhibiting realistic driving behaviors.
AI traffic vehicles are spawned at configurable intervals and placed in available lanes. Each vehicle maintains its own speed, acceleration profile, and lane position. Vehicles exhibit the following behaviors: constant-speed cruising with minor speed variations, traffic light compliance (deceleration for red, proceed for green, decision-making for yellow), forward distance sensing to prevent collisions with vehicles ahead, and tail light rendering that intensifies during braking events.
Traffic signals at intersections operate as finite state machines cycling through red, yellow, and green states with configurable durations. The state machine manages countdown timers and transitions, with the current state influencing AI vehicle behavior. Signals are rendered as 3D-projected housings with colored light indicators that cast subtle glow effects onto the surrounding road surface.
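Such a signal can be sketched as a three-phase finite state machine with per-phase countdowns; the durations below are illustrative, not TeslaWay's configured values:

```javascript
// Phase table: duration in ms and the successor state.
const PHASES = {
  green:  { duration: 8000, next: 'yellow' },
  yellow: { duration: 2500, next: 'red' },
  red:    { duration: 7000, next: 'green' },
};

class TrafficLight {
  constructor(initial = 'green') {
    this.state = initial;
    this.elapsed = 0;       // time spent in the current phase
  }
  // Advance the countdown; transition when the phase expires. The loop
  // handles large dt values spanning multiple phases.
  update(dtMs) {
    this.elapsed += dtMs;
    while (this.elapsed >= PHASES[this.state].duration) {
      this.elapsed -= PHASES[this.state].duration;
      this.state = PHASES[this.state].next;
    }
  }
  // Remaining time drives a HUD countdown display.
  remainingMs() { return PHASES[this.state].duration - this.elapsed; }
}
```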
Each AI vehicle monitors the distance to the vehicle directly ahead in its lane, adjusting speed to maintain safe following distance. This produces natural traffic flow patterns including accordion-like compression and expansion of traffic density at intersections and curves, preventing vehicle overlap in the rendered scene.
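One way to realize this behavior is to scale each vehicle's target speed with the gap to the vehicle ahead; the gap threshold and the acceleration gains below are illustrative assumptions, not TeslaWay's tuned parameters:

```javascript
// Per-frame speed adjustment from forward distance sensing.
// speed/cruiseSpeed in m/s, gapAhead in meters, dt in seconds.
function adjustSpeed(speed, cruiseSpeed, gapAhead, dt) {
  const SAFE_GAP = 25;  // begin slowing inside this gap
  const BRAKE = 12;     // deceleration gain toward a slower target
  const ACCEL = 4;      // gentle recovery toward cruise speed
  // Scale the target with the gap so traffic compresses smoothly
  // rather than stopping abruptly (the "accordion" effect).
  const target = gapAhead < SAFE_GAP
    ? cruiseSpeed * (gapAhead / SAFE_GAP)
    : cruiseSpeed;
  const rate = target < speed ? BRAKE : ACCEL;
  const delta = Math.max(-rate * dt, Math.min(rate * dt, target - speed));
  return Math.max(0, speed + delta);
}
```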
The object detection visualization presents real-time classification results across five primary categories, simulating the output of a production perception pipeline.
| Category | Bounding Box Color | Confidence Range | Detection Source |
|---|---|---|---|
| Lane Markings | Green | 92–99% | Camera + LiDAR fusion |
| Vehicles | Blue | 88–98% | Camera + Radar + LiDAR |
| Pedestrians | Yellow | 85–96% | Camera + Ultrasonic |
| Traffic Lights | Red/Amber/Green | 90–99% | Camera (primary) |
| Road Signs | Cyan | 87–97% | Camera (primary) |
Confidence scores are generated using a bounded random walk algorithm. Each detection category maintains a running confidence value that is perturbed each frame by a small random delta, clamped to the category's valid range. This produces smooth, fluctuating confidence values characteristic of real neural network inference, where minor frame-to-frame variations in input data cause corresponding variations in output confidence. Scores are displayed in the HUD with color coding: green for high confidence (>90%), yellow for moderate (70–90%), and red for low (<70%).
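The bounded random walk can be sketched in a few lines; the per-frame step size is an assumption, while the category ranges come from the table above:

```javascript
// One random-walk step: perturb by a delta in [-step, +step], then
// clamp to the category's valid range. `rand` is injectable for testing.
function stepConfidence(value, min, max, step = 0.4, rand = Math.random) {
  const next = value + (rand() * 2 - 1) * step;
  return Math.min(max, Math.max(min, next));
}

// Per-frame update across the five detection categories.
const categories = {
  lanes:         { value: 95.5, min: 92, max: 99 },
  vehicles:      { value: 93.0, min: 88, max: 98 },
  pedestrians:   { value: 90.0, min: 85, max: 96 },
  trafficLights: { value: 94.0, min: 90, max: 99 },
  roadSigns:     { value: 92.0, min: 87, max: 97 },
};
function updateConfidences() {
  for (const c of Object.values(categories)) {
    c.value = stepConfidence(c.value, c.min, c.max);
  }
}
```

Because each step is small relative to the range, successive HUD readings differ by fractions of a percent, which reads as live inference rather than random flicker.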
Detected objects in the 3D scene are annotated with perspective-projected bounding boxes rendered as semi-transparent colored rectangles with solid borders. A label tag at the top of each box displays the class name and confidence percentage. Boxes are drawn after scene rendering but before the HUD overlay, ensuring integration with the 3D scene while remaining visually distinct.
The HUD is a defining visual element of TeslaWay, providing continuous real-time information overlays inspired by Tesla's FSD visualization paradigm.
| Component | Position | Information |
|---|---|---|
| Speed Indicator | Bottom center | Current speed with unit label |
| Battery Display | Bottom area | Battery percentage with visual bar |
| Location Display | Top left | Street name, GPS coordinates |
| Sensor Status Panel | Top right | Active sensor count per type |
| Object Detection Panel | Right side | Per-category accuracy percentages |
| Radar Minimap | Bottom right | Top-down view with range rings |
| AI Decision Text | Center area | Current driving decision narrative |
| Autopilot Status | Top area | Engagement status and NN version |
The HUD employs the Orbitron typeface for primary display elements, chosen for its geometric, futuristic aesthetic aligned with automotive instrument cluster design. Secondary text uses the Inter typeface for readability at smaller sizes. All text is rendered onto the Canvas using fillText with appropriate font sizing, color, and alpha blending. Color coding follows AV visualization conventions: green for healthy/active status, amber for caution, red for alerts, and cyan for informational displays and LiDAR data.
The HUD cycles through simulated Bay Area driving locations including Market Street in San Francisco and Highway 101 North near Palo Alto. Each location displays corresponding GPS coordinate ranges and street name labels, enhancing realism. Location transitions occur smoothly as the simulation progresses, simulating a continuous drive through Tesla's home territory.
TeslaWay opens with a cinematic introduction sequence designed to establish visual identity before the driving simulation begins. The sequence consists of three phases: an animated Tesla logo reveal, a simulated system boot in which the sensor subsystems come online, and a countdown to engagement.
Upon countdown completion, the simulation transitions seamlessly to the live driving view with all systems active. A notification system displays status messages ("AUTOPILOT ENGAGED", "ALL SYSTEMS NOMINAL") to reinforce the autonomous driving narrative.
TeslaWay applies several post-processing effects to achieve its cinematic visual quality:

- A scanline overlay evoking a camera-feed display
- A vignette darkening the frame edges to focus attention on the road
- `shadowBlur` and `shadowColor` properties to create soft glow effects, reinforcing the high-tech aesthetic

Achieving 60 FPS within the Canvas 2D API requires careful optimization. TeslaWay employs object pooling for traffic vehicles and LiDAR points to reduce garbage collection pressure, frustum culling to skip off-screen geometry, level-of-detail reduction for distant buildings, batched Canvas state changes to minimize context switches, and dirty region tracking for infrequently changing HUD elements.
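The object pooling mentioned above can be sketched as a free-list that recycles point objects instead of allocating per frame; the point fields shown are assumptions about TeslaWay's internal representation:

```javascript
// Minimal object pool for LiDAR points: objects are recycled through a
// free list, so steady-state rendering allocates nothing per frame.
class PointPool {
  constructor(capacity) {
    this.free = [];
    for (let i = 0; i < capacity; i++) {
      this.free.push({ x: 0, y: 0, z: 0, dist: 0, active: false });
    }
  }
  acquire() {
    // Fall back to allocation only if the pool is exhausted.
    const p = this.free.pop() || { x: 0, y: 0, z: 0, dist: 0, active: false };
    p.active = true;
    return p;
  }
  release(p) {
    p.active = false;
    this.free.push(p);   // recycled, never garbage collected
  }
}
```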
The night scene uses a simplified lighting model with ambient, directional, and point light contributions. Street lights and vehicle headlights act as point sources with inverse-square falloff approximated through radial Canvas gradients. Building windows emit warm-toned light with varying intensities, creating a living urban environment. The combination produces visually rich night driving scenes with pools of illumination and atmospheric depth.
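The approximated inverse-square falloff can be sketched as a scalar attenuation function whose values would feed the color stops of a `createRadialGradient` call in the browser; the bias term and radius fraction below are illustrative assumptions:

```javascript
// Point-light attenuation approximating inverse-square falloff.
// distance and radius in the same world units; returns 0..intensity.
function attenuation(distance, intensity = 1, radius = 30) {
  if (distance >= radius) return 0;   // no contribution beyond the radius
  // Normalized inverse-square with a +1 bias to avoid the singularity
  // at distance 0.
  const falloff = intensity / (1 + (distance / (radius * 0.25)) ** 2);
  // Linear fade so the light reaches exactly zero at the radius edge,
  // matching the outer gradient stop.
  return falloff * (1 - distance / radius);
}
```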
The simulation makes extensive use of advanced Canvas 2D features:

- `globalCompositeOperation` modes (`lighter`, `screen`) for additive light blending without custom shaders
- Path construction APIs (`beginPath`, `moveTo`, `lineTo`, `quadraticCurveTo`, `arc`) for complex geometry including the Tesla logo and vehicle silhouettes
- The transformation stack (`save`, `restore`, `translate`, `rotate`) for the radar sweep and LiDAR rotation animations
- `shadowBlur` and `shadowColor` for glow effects around HUD text and light sources

TeslaWay provides a minimal control scheme reflecting its autonomous nature:
| Input | Action | Effect |
|---|---|---|
| Arrow Up | Increase Speed | Accelerates ego vehicle in 10 km/h increments, updates speedometer |
| Arrow Down | Decrease Speed | Decelerates ego vehicle with smooth deceleration curves |
| L Key | Toggle LiDAR | Enables/disables LiDAR point cloud overlay with opacity transition |
Steering, lane keeping, and traffic navigation are handled entirely by the AI system. The deliberate simplicity of user controls emphasizes the autonomous nature of the simulation and focuses attention on the perception pipeline visualization rather than driving mechanics.
| Hardware | Browser | Resolution | Avg FPS | Frame Time |
|---|---|---|---|---|
| MacBook Pro M2 | Chrome 120 | 1920×1080 | 60 | ~8 ms |
| MacBook Pro M2 | Safari 17 | 1920×1080 | 60 | ~7 ms |
| Desktop i7-12700K | Chrome 120 | 1920×1080 | 60 | ~10 ms |
| Mid-range Laptop | Firefox 121 | 1366×768 | 55–60 | ~14 ms |
| Mobile (iPhone 15) | Safari Mobile | 390×844 | 50–60 | ~16 ms |
TeslaWay maintains interactive frame rates across diverse hardware, validating the Canvas 2D API as a viable rendering target for real-time 3D simulation with appropriate optimization. The single-file architecture results in approximately 45–55 KB total size with no additional assets. Memory consumption remains stable at 30–50 MB throughout extended sessions with no memory leaks, thanks to the object pooling strategy.
The zero-dependency approach yields several practical benefits: zero supply-chain risk with no npm dependencies that could introduce vulnerabilities, guaranteed long-term compatibility as the Canvas 2D API is a stable web standard, no build step friction making the codebase accessible to developers of all experience levels, and instant deployment through simple file hosting.
TeslaWay communicates key AV concepts: the role of multi-sensor fusion in environmental perception, sensor types and configurations in self-driving vehicles, real-time processing requirements of perception pipelines, object detection and classification in dynamic environments, and the information density that AV systems must process continuously. The interactive, browser-based format makes these concepts accessible without specialized hardware or software.
The Canvas 2D rendering does not achieve the geometric accuracy of polygon-based 3D engines. Without WebGL, GPU-accelerated geometry processing is unavailable, limiting scene complexity. Sensor simulations are visualization-focused rather than physically accurate: LiDAR point distributions, radar returns, and camera models do not incorporate realistic noise models or sensor-specific artifacts. AI traffic behavior does not model complex interactions such as unprotected left turns or pedestrian yielding. These limitations are acceptable given the system's purpose as a demonstration and educational tool.
TeslaWay demonstrates that a comprehensive, visually compelling autonomous vehicle simulation can be implemented entirely within the browser using only native web platform APIs. The system renders 3D perspective road environments, simulates a multi-modal sensor array with LiDAR point cloud visualization, implements AI traffic management with dynamic traffic light state machines, provides real-time object detection with confidence scoring, and presents all information through a cinematic heads-up display—all within a single HTML file requiring no build process or server infrastructure.
Future directions include: expanding the road network with highway merges and multi-lane configurations, implementing physically-based sensor noise models, adding weather effects (rain, fog, snow) affecting both rendering and simulated sensor degradation, incorporating path planning visualization, extending the urban environment with pedestrians and cyclists, and exploring optional WebGL rendering as a progressive enhancement.
The complete source code is available at https://github.com/romizone/teslaway under the MIT license, and a live demonstration is accessible at https://teslaway.vercel.app/.