You are monitoring a set of vehicles. They send lots of numbers (speed, lat/lng, temp, etc) and flags (engine on, AC on, etc). You need to store the information for several purposes:
⚈ plotting travel
⚈ monitoring for flaws (over-temp, failed subsystems, etc)
⚈ vehicle theft
⚈ driver naughtiness (speeding)
Consider keeping the raw data exactly as it comes in. Simply put it into files (one per day, or whatever granularity suits you). You should rarely need to look at these files again, except for one purpose: rebuilding the tables.
Locations

Most of the raw data might be held in a table that differs little from the raw feed. Such a table is largely redundant, so consider removing it after the system has been suitably debugged. It should be partitioned because it will be large, and partitioning is the best way to jettison 'old' data. (See ...)
This table shows where the vehicle is, in detail. It would have veh_id, lat/lng, but not many other columns.
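As a sketch, the table might look like the following. SQLite stands in for MySQL here, and the column names and types are assumptions; in MySQL you would likely use TIMESTAMP and DECIMAL for lat/lng, and add PARTITION BY RANGE on the timestamp so old partitions can be dropped cheaply:

```python
import sqlite3

# SQLite stand-in for the MySQL table; partitioning is omitted because
# SQLite has none.  Column choices are illustrative, not prescriptive.
DDL = """
CREATE TABLE Locations (
    veh_id INTEGER NOT NULL,
    ts     TEXT    NOT NULL,   -- MySQL: TIMESTAMP (partition key)
    lat    REAL    NOT NULL,   -- MySQL: consider DECIMAL(8,6)
    lng    REAL    NOT NULL,   -- MySQL: consider DECIMAL(9,6)
    PRIMARY KEY (veh_id, ts)
)
"""

def make_db() -> sqlite3.Connection:
    """Create an in-memory database holding the Locations table."""
    conn = sqlite3.connect(":memory:")
    conn.execute(DDL)
    return conn
```

The (veh_id, ts) primary key clusters each vehicle's track together, which suits the "plot this vehicle's travel" query.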
Waypoints

For longer-term use (that is, last year's data), the Locations table can be summarized into a subset of the lat/lng points. It would include any time the engine is turned off, and might include both the off and on times. It could also include a row for each "stop", indicating things like stoplights. For significant turns (whether due to routing or to non-straight roads), a more complex algorithm is needed.
This table could be used much later to give an overview of the vehicle's travels, even after the detail rows in Locations have been purged.
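One plausible turn-detection algorithm (not necessarily the one the author has in mind): keep a point whenever the heading changes by more than some threshold. The 30-degree threshold and the flat-earth bearing approximation below are illustrative assumptions:

```python
import math

def bearing(p, q):
    """Approximate heading in degrees from point p to point q.
    Flat-earth math is adequate for the short hops between
    consecutive GPS fixes."""
    dlat = q[0] - p[0]
    dlng = (q[1] - p[1]) * math.cos(math.radians(p[0]))
    return math.degrees(math.atan2(dlng, dlat)) % 360

def waypoints(points, turn_deg=30.0):
    """points: list of (lat, lng) in time order.  Keep the endpoints
    plus any interior point where the heading changes by more than
    turn_deg -- a 'significant turn'.  Straight stretches collapse
    to just their endpoints."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        b_in = bearing(points[i - 1], points[i])
        b_out = bearing(points[i], points[i + 1])
        delta = abs((b_out - b_in + 180) % 360 - 180)  # smallest angle diff
        if delta > turn_deg:
            kept.append(points[i])
    kept.append(points[-1])
    return kept
```

A day of straight-freeway driving thus contributes only two rows, matching the observation below that such a day adds almost nothing to the table.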
Alerts

Hopefully this is a small table. It would contain changes to the metrics that need reviewing for diagnostic and other purposes:
⚈ Engine temperature
⚈ Speed (for seeing if driver is violating laws)
⚈ Alerts from OBD monitoring (various failures -- mostly implemented as flags)
Numeric values need two thresholds -- one for when the value first goes above something ("the engine has started overheating"), and another for when it later drops below some lower value ("it has cooled off now").
In a similar way, booleans should be recorded on any change.
"Debouncing" -- The two thresholds effectively avoids excessive number of rows in the table due to a valus oscillating around a single threshold. (Booleans may be trickier to debounce.)
Some ways to build the Alerts table:
⚈ Key-value: essentially veh_id, alert_name, alert_value, timestamp.
⚈ A wide row of most Fact data at the time of the alert. For example, AC-on and outside-temp have some impact on engine-overtemp.
⚈ A compromise -- one wide row per alert_name per veh_id per timestamp. This is a bit bulkier, but may make reporting easier. (Eg, "list all eng-temp alerts and include contributing metrics")
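The three layouts can be contrasted with a small sketch; all of the function names, column names, and sample metrics here are hypothetical:

```python
def kv_rows(veh_id, ts, metrics):
    """Layout 1, key-value: one row per metric.  Flexible, but the
    classic EAV trade-off -- reporting must reassemble the rows."""
    return [(veh_id, name, value, ts) for name, value in metrics.items()]

def wide_row(veh_id, ts, facts):
    """Layout 2: one wide row holding most Fact data at alert time."""
    return {"veh_id": veh_id, "ts": ts, **facts}

def compromise_row(veh_id, ts, alert_name, contributing):
    """Layout 3: one wide row per alert_name/veh_id/timestamp,
    carrying only the metrics that contribute to THIS alert --
    eg, AC-on and outside-temp alongside an eng-temp alert."""
    return {"veh_id": veh_id, "ts": ts,
            "alert_name": alert_name, **contributing}
```

The compromise layout makes the "list all eng-temp alerts with contributing metrics" report a single-table scan instead of a self-join.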
Implementing the tables
One approach is to receive a row, decide what info, if any, goes into each table, and do the insert(s).
But that has a problem -- some decisions on what to put into a table depend on data both before and after the current row. Also, you might choose to include, say, at least one row per day in Alerts even if the vehicle sits idle for several days. (Doing so assures the reader that the vehicle was not lost, merely idle.)
Presumably a vehicle that is turned off is not sending any data?
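The "at least one row per day" idea could be implemented by a small daily job; the helper name and the 'idle-heartbeat' alert_name are assumptions:

```python
from datetime import date, timedelta

def heartbeat_alerts(veh_id, last_alert_day: date, today: date):
    """Synthesize one 'idle' Alerts row per day that produced no real
    alerts, so a reader can distinguish 'vehicle idle' from 'vehicle
    lost'.  Intended to run once daily, after real alerts for the day
    have been inserted."""
    out = []
    d = last_alert_day + timedelta(days=1)
    while d < today:
        out.append((veh_id, "idle-heartbeat", None, d.isoformat()))
        d += timedelta(days=1)
    return out
```

Because this fills gaps after the fact, it sidesteps the before-and-after problem: the job already knows which days stayed quiet.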
The Alerts table can, and should, be inserted into promptly. As the table name implies, it should always be the source of things to investigate. A web page "reporting" on one vehicle in detail, or all vehicles in summary, will benefit from having a single source (this table) for all alerts, even those that happened only seconds ago.
Waypoints, on the other hand, is probably best handled by a separate process that digests Locations. And it might do so with a delay of an hour or a day. If the vehicle drives all day on one straight freeway, there is nothing to put in the table.
See also

⚈ My discussion of Summary Tables
⚈ My discussion of handling Sensor data
⚈ Partition Maintenance (DROP+REORG) for time series (includes a list of PARTITION uses)
⚈ Big DELETEs -- how to optimize -- and other chunking advice, plus a use for PARTITIONing
⚈ Chunking lengthy DELETE/UPDATE/etc.
Data Warehouse techniques:
⚈ Data Warehouse Overview
⚈ Summary Tables
⚈ High speed ingestion
⚈ Bulk Normalization
⚈ Schema and code design for large Sensor database
⚈ Entity-Attribute-Value (EAV) -- a common, poorly performing, design pattern; plus an alternative
⚈ 5 methods for 'Find Nearest'
⚈ Lat/Lng search to Find the nearest 10 pizza parlors
⚈ Lat/Long representation choices
⚈ Z-Order 'find nearest'
⚈ Pagination, not with OFFSET, LIMIT
⚈ Techniques on efficiently finding a random row (On beyond ORDER BY RAND())
⚈ GUID/UUID Performance (type 1 only)
⚈ IP Range Table Performance -- or other disjoint ranges
⚈ Rollup Unique User Counts
⚈ Alter of a Huge table -- mostly obviated by 5.6
⚈ Efficient List of Latest 10 news articles
⚈ Build and execute a Pivot SELECT (showing rows as columns)
⚈ (Groupwise Max): Efficiently find largest row(s) for each group
⚈ Other Tips, Tuning, Debugging, Optimizations, etc...
⚈ Rick's RoTs (Rules of Thumb -- lots of tips)
⚈ Datatypes and building a good schema
⚈ Memory Allocation (caching, etc)
⚈ Character Set and Collation problem solver
⚈ Trouble with UTF-8 (if you want case folding, but accent sensitivity, please file a request at https://bugs.mysql.com)
⚈ Python tips, PHP tips, other language tips
⚈ utf8 Collations
⚈ utf8mb4 Collations on 8.0
⚈ Converting from MyISAM to InnoDB -- includes differences between them
⚈ Compound INDEXes plus other insights into the mysteries of INDEXing
⚈ Cookbook for Creating Indexes
⚈ Many-to-many mapping table
⚈ Handler counts
⚈ wp_postmeta
⚈ UNION+OFFSET
⚈ MySQL Limits -- built-in hard limits
⚈ 767-byte INDEX limit
⚈ Galera, tips on converting to (Percona XtraDB Cluster, MariaDB 10, or manually installed)
⚈ 5.7's Query Rewrite -- perhaps 5.7's best perf gain, at least for this forum's users
⚈ Analyze MySQL Performance
⚈ Analyze VARIABLEs and GLOBAL STATUS
⚈ Analyze SlowLog
My slides from conferences
⚈ MiniFest 2021 - Rick James & Daniel Black - Answering on Stack Overflow(+comments) - MariaDB Frontlines
⚈ Percona Live 4/2017 - Rick's RoTs (Rules of Thumb) - MySQL/MariaDB
⚈ Percona Live 4/2017 - Index Cookbook - MySQL/MariaDB
⚈ Percona Live 9/2015 - PARTITIONing - MySQL/MariaDB

Contact me via LinkedIn; be sure to include a brief teaser in the Invite request: