Building on the NVC static web page scraper I initially created, I wanted to build a dynamic scraper and display the data over time. Since I have a keen interest in what Tesla is doing, I focused on their configuration pages.
How do they work?
The pages rely on two components.
- Collect information: a Python script using Requests and Selenium loads the web page and browses through the DOM based on my initial analysis. The script collects the data into a JSON object, which gets saved on completion.
- Display information: a Next.js page generates the static web page from the JSON file and Chart.js objects. To make the data easier to review, I have included buttons that filter the charts by option prefix (P-, W-, I-, O-).
Next steps:
- Add seater option data to my page
- Use object-oriented programming to define a car and its options
- Add Model S & X trackers
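The object-oriented rework could look something like this: a `Car` holding a list of `Option` objects, with the same P-/W-/I-/O- prefix grouping the chart buttons use. This is only a sketch of one possible design; the class names, codes, and prices are made up.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Option:
    code: str   # e.g. "P-PERF" -- hypothetical code, not Tesla's real one
    name: str
    price: int

@dataclass
class Car:
    model: str
    options: List[Option] = field(default_factory=list)

    def filter_options(self, prefix: str) -> List[Option]:
        """Return only the options whose code starts with the given
        prefix (mirrors the P-/W-/I-/O- filter buttons on the page)."""
        return [o for o in self.options if o.code.startswith(prefix)]

    def total_price(self, base_price: int) -> int:
        """Configured price: base model plus all selected options."""
        return base_price + sum(o.price for o in self.options)
```

Defining the car this way would also make the S & X trackers cheap to add: each model becomes another `Car` instance fed from its own scraped JSON.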