I know this has been discussed before, but I thought I'd resurrect it. Automated clean-up of bad data from the datalogs would make life simpler.
I get the odd rogue value in the datalogs, e.g. 35894 rpm or 11532.3 degrees of spark advance, and I currently remove these from the .csv file by hand. That's fine for viewing in Excel, but it doesn't help with viewing the history tables in TP.
Could TP be made to filter out data packets where a value doesn't conform to a specified filter range? Taking rpm as an example: if you set the filter range to a minimum of 0 rpm and a maximum of 7400 rpm, TP would discard any data packet with an rpm value outside that range. If either or both filter bounds were left empty, TP would ignore them.
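Something like this is what I have in mind; just a rough sketch in Python (the field names and packet structure here are made up for illustration, not anything TP actually exposes):

```python
# Hypothetical packet filter: discard any packet whose value falls
# outside a user-defined range. An empty (None) bound is ignored.

def packet_ok(packet, filters):
    """filters maps a field name to a (min, max) pair; either bound may be None."""
    for field, (lo, hi) in filters.items():
        value = packet.get(field)
        if value is None:
            continue
        if lo is not None and value < lo:
            return False
        if hi is not None and value > hi:
            return False
    return True

# Example: keep only packets with rpm between 0 and 7400.
filters = {"rpm": (0, 7400)}
packets = [{"rpm": 2500}, {"rpm": 35894}, {"rpm": 3100}]
clean = [p for p in packets if packet_ok(p, filters)]
# clean == [{"rpm": 2500}, {"rpm": 3100}]
```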
No ETA. What I'm planning on implementing is the ability to filter data that gets fed to history tables. I would not reject packets based on filtered values.
-Mark
***************************************
TunerPro Author
1989 Trans Am
TBH, I have no idea; I'm assuming it's electrical interference causing glitches in the data stream. There's no apparent pattern to them; sometimes they're seconds apart, other times minutes apart.
I found a way to do this with TP as it is now.
Take the data and pass it through a lookup table with min and max values assigned.
That corrected my economy values so they can't exceed 50 MPG; previously, a zero reading would cause chaos with the math, and missing data packets would produce really high numbers.
To do this, make a lookup table called "Limits".
Set the minimum entry so 0 maps to 0 and the maximum entry so 50 maps to 50.
Send the output of the ALDL data conversion through the lookup table and it cleans the data up nicely.
It keeps the value from going negative and caps the upper end at 50.
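Put another way, the lookup is just clamping the value. Roughly this, if it helps to see it as code (illustration only; in TP it's done through the table, not a script):

```python
def clamp(value, lo=0.0, hi=50.0):
    """What the "Limits" lookup does: pin the value inside [lo, hi]."""
    return max(lo, min(hi, value))

# A negative glitch and a rogue high reading both get pinned
# into the 0-50 window instead of wrecking the graph scale.
readings = [-3.2, 12.7, 50.0, 11532.3]
print([clamp(r) for r in readings])  # [0.0, 12.7, 50.0, 50.0]
```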
Enjoy
I'm not really following what you are doing there, but I'll see if I can figure it out!
There's also something else odd about the rogue data: the bad values always seem to be increases rather than a mix of increases and decreases.
Basically, by passing the data through a "gated window" of 0-50, the values cannot go outside those limits.
You can create several limit lookups for different ranges if desired.
Some of the calculated values can (and will) produce values containing garbage characters when there's a null value; these read as very large negative numbers.
This also eliminates those kinds of issues.
Doing this keeps the graph views readable instead of having one point shoot off into the distance.
OK, I see how to do this now, but unfortunately it doesn't help. For example, if I have a bad rpm value, say 25k, it gets clipped to 7.5k, which is still wrong and still generates a spike.
The only way I can see to achieve the datalog clean-up is to discard any data packet whose values exceed preset limits.
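To show what I mean with made-up numbers: clamping leaves a false spike sitting at the limit, while discarding removes the bad sample entirely.

```python
RPM_MAX = 7400  # example preset limit

samples = [2500, 2600, 25000, 2700]  # one rogue 25k reading

# Clamping: the rogue sample becomes a false 7400 spike.
clamped = [min(s, RPM_MAX) for s in samples]   # [2500, 2600, 7400, 2700]

# Discarding: the rogue sample disappears altogether.
kept = [s for s in samples if s <= RPM_MAX]    # [2500, 2600, 2700]
```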